Java 3D Design
Specification
JavaSoft
A Sun Microsystems, Inc. Business
901 San Antonio Road
Palo Alto, CA 94303 USA
415 960-1300 fax 415 969-9131
© 1997, 1998 Sun Microsystems, Inc.
901 San Antonio Road, Palo Alto, California 94303 U.S.A.
All rights reserved.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the United States
Government is subject to the restrictions set forth in DFARS 252.227-7013 (c)(1)(ii) and
FAR 52.227-19.
The release described in this document may be protected by one or more U.S. patents, foreign patents, or pending applications.
Sun Microsystems, Inc. (SUN) hereby grants to you a fully paid, nonexclusive, nontransferable, perpetual, worldwide limited license (without the right to sublicense) under SUN’s intellectual property rights that are essential to practice this specification. This license allows and is limited to the creation and distribution of clean-room implementations of this specification that (i) are complete implementations of this specification, (ii) pass all test suites relating to this specification that are available from SUN, (iii) do not derive from SUN source code or binary materials, and (iv) do not include any SUN binary materials without an appropriate and separate license from SUN.
Java, JavaScript, and Java 3D are trademarks of Sun Microsystems, Inc. Sun, Sun Microsystems, the Sun logo, Java, and HotJava are trademarks or registered trademarks of Sun Microsystems, Inc. UNIX® is a registered trademark in the United States and other countries, exclusively licensed through X/Open Company, Ltd. All other product names mentioned herein are the trademarks of their respective owners.
THIS PUBLICATION IS PROVIDED “AS IS” WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, OR NON-INFRINGEMENT.
THIS PUBLICATION COULD INCLUDE TECHNICAL INACCURACIES OR TYPOGRAPHICAL ERRORS. CHANGES ARE PERIODICALLY ADDED TO THE INFORMATION HEREIN; THESE CHANGES WILL BE INCORPORATED IN NEW EDITIONS OF THE PUBLICATION. SUN MICROSYSTEMS, INC. MAY MAKE IMPROVEMENTS AND/OR CHANGES IN THE PRODUCT(S) AND/OR THE PROGRAM(S) DESCRIBED IN THIS PUBLICATION AT ANY TIME.
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
1 Introduction to Java 3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
1.2 Programming Paradigm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
1.2.1 The Scene Graph Programming Model . . . . . . . . . . . . . . .2
1.2.2 Rendering Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
1.2.3 Extensibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
1.3 High Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
1.3.1 Layered Implementation. . . . . . . . . . . . . . . . . . . . . . . . . . .4
1.3.2 Target Hardware Platforms . . . . . . . . . . . . . . . . . . . . . . . .4
1.4 Support for Building Applications and Applets . . . . . . . . . . . . . . . . . . .5
1.4.1 Browsers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
1.4.2 Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
1.5 Overview of Java 3D Object Hierarchy. . . . . . . . . . . . . . . . . . . . . . . . . .6
1.6 Structuring the Java 3D Program. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
1.6.1 Java 3D Application Scene Graph . . . . . . . . . . . . . . . . . . .7
1.6.2 Recipe for a Java 3D Program . . . . . . . . . . . . . . . . . . . . . .8
1.6.3 HelloUniverse: A Sample Java 3D Program . . . . . . . . . . .9
D Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
D.1 BadTransformException. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
D.2 CapabilityNotSetException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
D.3 DanglingReferenceException. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
D.4 IllegalRenderingStateException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
D.5 IllegalSharingException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
D.6 MismatchedSizeException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
D.7 MultipleParentException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
D.8 RestrictedAccessException . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
E Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
E.1 Fog Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .447
E.2 Lighting Equations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .448
E.3 Sound Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .450
E.3.1 Headphone Playback Equations . . . . . . . . . . . . . . . . . . .450
E.3.2 Speaker Playback Equations. . . . . . . . . . . . . . . . . . . . . .458
E.4 Texture Mapping Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .459
E.4.1 Texture Lookup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .459
E.4.2 Texture Application . . . . . . . . . . . . . . . . . . . . . . . . . . . .462
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Figure 9-5 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable
Only the α-Increasing and α-at-1 Portion of the Waveform . . . . . . . . . 240
Figure 9-6 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable
Only the α-Decreasing and α-at-0 Portion of the Waveform. . . . . . . . . 240
Figure 9-7 An Interpolator Set to Loop Infinitely and Mode Flags Set to
Enable All Portions of the Waveform . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Figure 9-8 How an α-Increasing Waveform Changes with Various Values of
increasingAlphaRampDuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Figure 13-1 Minimal Immediate-Mode Structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Figure A-1 Math Object Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Figure B-1 A Generalized Triangle Strip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
Figure B-2 A Generalized Triangle Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
Figure B-3 Encoding of the Six Sextants of Each Octant of a Sphere . . . . . . . . . . . 391
Figure B-4 Bit Layout of Geometry Compression Commands . . . . . . . . . . . . . . . . . 396
Figure C-1 Display Rigidly Attached to the Tracker Base . . . . . . . . . . . . . . . . . . . . 417
Figure C-2 Display Rigidly Attached to the Head Tracker (Sensor). . . . . . . . . . . . . 420
Figure C-3 A Portion of a Scene Graph Containing a Single Screen3D Object . . . . 425
Figure C-4 A Single-Screen Display Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Figure C-5 A Portion of a Scene Graph Containing Three Screen3D Objects . . . . . 426
Figure C-6 A Three-Screen Display Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Figure C-7 The Camera-based View Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
Figure C-8 A Perspective Viewing Frustum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
Figure C-9 Perspective View Model Arguments. . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
Figure C-10 Orthographic View Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Figure E-1 Signal to Only One Ear Is Direct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Figure E-2 Signals to Both Ears Are Indirect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Figure E-3 ConeSound with a Single Distance Gain Attenuation Array . . . . . . . . . 454
Figure E-4 ConeSound with Two Distance Attenuation Arrays . . . . . . . . . . . . . . . . 454
This document describes the Java 3D™ API and presents some details on the implementation of the API. This specification is not intended as a programmer’s guide. The programmer’s guide will be written after the specification has been finalized.
This specification is written for 3D graphics application programmers. We assume that the reader has at least a rudimentary understanding of computer graphics. This includes familiarity with the essentials of computer graphics algorithms as well as familiarity with basic graphics hardware and associated terminology.
Related Documentation
This specification is intended to be used in conjunction with the Java 3D reference guide, an online, browser-accessible, javadoc-generated API reference.
Style Conventions
The following style conventions are used in this specification:
• Lucida type is used to represent computer code and the names of files and
directories.
• Bold Lucida type is used for Java 3D API declarations.
• Bold type is used to represent variables.
• Italic type is used for emphasis and for equations.
Programming Conventions
Java 3D uses the following programming conventions:
Acknowledgments
We gratefully acknowledge Warren Dale for writing the Sound API portion of this specification, Daniel Petersen for writing the scene graph sharing portion of the specification, and Bruce Bartlett for his assistance with the editing, formatting, and indexing of the specification.
We thank the Java 3D partners for their help in defining the Java 3D API. The
Java 3D partner companies include Silicon Graphics, Inc., Intel Corporation,
Apple Computer, Inc., and Sun Microsystems, Inc.
We also thank the many individuals and companies for their comments and suggestions on the successive drafts of this specification.
Henry Sowizral
Kevin Rushforth
Michael Deering
Sun Microsystems
November 1997
1.1 Goals
Java 3D was designed with several goals in mind. Chief among them is high performance. Several design decisions were made so that Java 3D implementations can deliver the highest level of performance to application users. In particular, when trade-offs were made, the alternative that benefited runtime execution was chosen.
Other important Java 3D goals are to
Version 1.1 Alpha 01, February 27, 1998 1
1.2.3 Extensibility
Most Java 3D classes expose only accessor and mutator methods. Those methods operate only on that object’s internal state, making it meaningless for an application to override them. Therefore, Java 3D declares most methods as final.
Applications can extend Java 3D’s classes and add their own methods. However,
they may not override Java 3D’s scene graph traversal semantics because the
nodes do not contain explicit traversal and draw methods. Java 3D’s renderer
retains those semantics internally.
Java 3D does provide hooks for mixing Java 3D–controlled scene graph rendering and user-controlled rendering using Java 3D’s immediate mode constructs (see Section 13.1.2, “Mixed-Mode Rendering”). Alternatively, the application can stop Java 3D’s renderer and do all its drawing in immediate mode (see Section 13.1.1, “Pure Immediate-Mode Rendering”).
Behaviors require applications to extend the Behavior object and to override its methods with user-written Java code. These extended objects should contain references to those scene graph objects that they will manipulate at run time. Chapter 9, “Behaviors and Interpolators,” describes Java 3D’s behavior model.
1.4.1 Browsers
Today’s Internet browsers support 3D content by passing such data to plug-in 3D
viewers that render into their own window. It is anticipated that, over time, the
display of 3D content will become integrated into the main browser display. In
fact, some of today’s 3D browsers display 2D content as 2D objects within a 3D
world.
1.4.2 Games
Developers of 3D game software have typically attempted to wring out every last ounce of performance from the hardware. Historically they have been quite willing to use hardware-specific, nonportable optimizations to get the best performance possible. As such, in the past, game developers have tended to program below the level of easy-to-use software such as Java 3D. However, the trend in
javax.media.j3d
    VirtualUniverse
    Locale
    View
    PhysicalBody
    PhysicalEnvironment
    Screen3D
    Canvas3D (extends awt.Canvas)
    SceneGraphObject
        Node
            Group
            Leaf
        NodeComponent
            Various component objects
    Transform3D
javax.vecmath
    Matrix classes
    Tuple classes
[Figure: a VirtualUniverse object with a Locale object and attached BranchGroup (BG) nodes]
while (true) {
    Process input
    If (request to exit) break
    Perform Behaviors
    Traverse the scene graph and render visible objects
}
Cleanup and exit
    return objRoot;
}
public HelloUniverse() {
    setLayout(new BorderLayout());
    Canvas3D c = new Canvas3D(graphicsConfig);
    add("Center", c);

    // Create a simple scene and attach it to the virtual
    // universe
    BranchGroup scene = createSceneGraph();
    UniverseBuilder u = new UniverseBuilder(c);
    u.addBranchGraph(scene);
}
}
public UniverseBuilder(Canvas3D c) {
    this.canvas = c;

    vpTrans.addChild(vp);
    vpRoot.addChild(vpTrans);
    view.attachViewPlatform(vp);
public ColorCube() {
    QuadArray cube = new QuadArray(24,
        QuadArray.COORDINATES | QuadArray.COLOR_3);
    cube.setCoordinates(0, verts);
    cube.setColors(0, colors);
all the geometry defined by its descendants. Spatial grouping allows for efficient
implementation of operations such as proximity detection, collision detection,
view frustum culling, and occlusion culling.
[Figure: a Virtual Universe containing Hi-Res Locales with BranchGroup (BG) nodes and leaf nodes]
The most common node object, along the path from the root to the leaf, that
changes the graphics state is the TransformGroup object. The TransformGroup
object can change the position, orientation, and scale of the objects below it.
Most graphics state attributes are set by a Shape3D leaf node through its constituent Appearance object, thus allowing parallel rendering. The Shape3D node also has a constituent Geometry object that specifies its geometry—this permits different shape objects to share common geometry without sharing material attributes (or vice versa).
2.1.3 Rendering
The Java 3D renderer incorporates all graphics state changes made in a direct
path from a scene graph root to a leaf object in the drawing of that leaf object.
Java 3D provides this semantic for both retained and compiled-retained modes.
Only those capability bits that are explicitly enabled (set) prior to the object being compiled or made live are legal. The methods for setting and getting capability bits are described next.
Constructors
The SceneGraphObject specifies one constructor.
public SceneGraphObject()
Methods
The following methods are available on all scene graph objects.
The first method returns a flag that indicates whether the node is part of a scene
graph that has been compiled. If so, only those capabilities explicitly allowed by
the object’s capability bits are allowed. The second method returns a flag that
indicates whether the node is part of a scene graph that has been attached to a
virtual universe via a high-resolution Locale object.
These three methods provide applications with the means for accessing and modifying the capability bits of a scene graph object. The bit positions of the capability bits are defined as public static final constants on a per-object basis. Every instance of every scene graph object has its own set of capability bits. An example of a capability bit is the ALLOW_BOUNDS_WRITE bit in node objects. Only those methods corresponding to capabilities that are enabled before the object is first compiled or made live are subsequently allowed for that object. A RestrictedAccessException is thrown if an application calls setCapability or clearCapability on live or compiled objects. Note that only a single bit may be set or cleared per method invocation—bits may not be ORed together.
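The discipline described above can be modeled in a few lines of standalone Java. This is an illustrative sketch only, not the real javax.media.j3d implementation; the class name, the makeLive method, and the constant positions here are hypothetical stand-ins for constants such as ALLOW_BOUNDS_WRITE.

```java
import java.util.BitSet;

public class CapabilityModel {
    // Hypothetical bit positions standing in for per-object constants
    // such as ALLOW_BOUNDS_READ / ALLOW_BOUNDS_WRITE.
    public static final int ALLOW_BOUNDS_READ = 0;
    public static final int ALLOW_BOUNDS_WRITE = 1;

    private final BitSet capabilities = new BitSet();
    private boolean liveOrCompiled = false;

    public void setCapability(int bit) {
        if (liveOrCompiled)              // frozen once live or compiled
            throw new IllegalStateException(
                "RestrictedAccessException: object is live or compiled");
        capabilities.set(bit);           // exactly one bit per invocation
    }

    public void clearCapability(int bit) {
        if (liveOrCompiled)
            throw new IllegalStateException(
                "RestrictedAccessException: object is live or compiled");
        capabilities.clear(bit);
    }

    public boolean getCapability(int bit) {
        return capabilities.get(bit);
    }

    public void makeLive() {             // stands in for compile/attach
        liveOrCompiled = true;
    }
}
```

Bits enabled before makeLive remain readable afterwards; any attempt to set or clear a bit once the object is live fails, mirroring the RestrictedAccessException rule above.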
These methods access or modify the userData field associated with this scene
graph object. The userData field is a reference to an arbitrary object and may be
used to store any user-specific data associated with this scene graph object—it is
not used by the Java 3D API. If this object is cloned, the userData field is copied
to the newly cloned object.
Constants
Node object constants allow an application to individually enable runtime capabilities. These capability bits are enforced only when the node is part of a live or compiled scene graph.
These bits, when set using the setCapability method, specify that the node will permit an application to invoke the getBounds and setBounds methods, respectively. An application can choose to enable a particular set method but not the associated get method, or vice versa. The application can choose to enable both methods or, by default, leave the method(s) disabled.
These bits, when set using the setCapability method, specify that the node will permit an application to invoke the getBoundsAutoCompute and setBoundsAutoCompute methods, respectively. An application can choose to enable a particular set method but not the associated get method, or vice versa. The application can choose to enable both methods or, by default, leave the method(s) disabled.
These flags specify that this Node can have its pickability read or changed.
This flag specifies that this Node will be reported in the collision SceneGraphPath if a collision occurs. This capability is only specifiable for Group nodes; it is ignored for Leaf nodes. The default for Group nodes is false. All interior nodes not needed for uniqueness in a SceneGraphPath that don’t have this flag set to true will not be reported in the SceneGraphPath.
These flags specify that this Node allows read or write access to its collidability
state.
This flag specifies that this node allows read access to its local-coordinates-to-
virtual-world-(Vworld)-coordinates transform.
Constructors
The Node object specifies the following constructor.
public Node()
This constructor constructs and initializes a Node object. The Node class provides an abstract class for all group and leaf nodes. It provides a common framework for constructing a Java 3D scene graph, specifically, bounding volumes.
Methods
The following methods are available on Node objects, subject to the capabilities
that are enabled for live or compiled nodes.
Retrieves the parent of this node, or null if this node has no parent. This method
is only valid during the construction of the scene graph. If this object is part of a
live or compiled scene graph, a RestrictedAccessException will be thrown.
These methods set and get the value that determines whether the node’s geometric bounds are computed automatically, in which case the bounds will be read-only, or are set manually, in which case the value specified by setBounds will be used. The default is automatic.
These methods set and retrieve the flag indicating whether this node can be picked. A setting of false means that this node and its children are all unpickable.
The set method sets the collidable value. The get method returns the collidable
value. This value determines whether this node and its children, if a group node,
can be considered for collision purposes. If the value is false, neither this node
nor any children nodes will be traversed for collision purposes. The default value
is true. The collidable setting is the way that an application can perform collision
culling.
This method creates a new instance of the node. This routine is called by cloneTree to duplicate the current node. cloneNode should be overridden by any user-subclassed objects. All subclasses must have their cloneNode method consist of the following lines:
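The required pattern is: construct a new instance of the subclass, then delegate all copying to duplicateNode. The standalone sketch below models that pattern with a minimal stand-in Node class and a hypothetical UserSubClass; the real classes live in javax.media.j3d.

```java
// Minimal stand-in for javax.media.j3d.Node, for illustration only.
abstract class Node {
    public abstract Node cloneNode(boolean forceDuplicate);

    public void duplicateNode(Node originalNode, boolean forceDuplicate) {
        // The real method copies all node information from originalNode.
    }
}

// Hypothetical user subclass showing the required cloneNode shape.
class UserSubClass extends Node {
    int userField;   // example of user state carried by the subclass

    @Override
    public Node cloneNode(boolean forceDuplicate) {
        UserSubClass usc = new UserSubClass();
        usc.duplicateNode(this, forceDuplicate);
        return usc;
    }

    @Override
    public void duplicateNode(Node originalNode, boolean forceDuplicate) {
        super.duplicateNode(originalNode, forceDuplicate);
        // Copy the subclass's own state in addition to the base copy.
        this.userField = ((UserSubClass) originalNode).userField;
    }
}
```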
This method copies all the node information from the originalNode into the
current node. This method is called from the cloneNode method, which is in turn
called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the
NodeComponent’s duplicateOnCloneTree value is used to determine whether
the NodeComponent should be duplicated in the new node or if just a reference
to the current node should be placed in the new node. This flag can be overridden
by setting the forceDuplicate parameter in the cloneTree method to true.
These methods duplicate all the nodes of the specified subgraph. Group nodes are duplicated via a call to cloneNode, and then cloneTree is called for each child node. For leaf nodes, component data can either be duplicated or be made a reference to the original data. Leaf node cloneTree behavior is determined by the duplicateOnCloneTree flag found in every leaf node’s component data class and by the forceDuplicate parameter. The forceDuplicate parameter, when set to true, causes the duplicateOnCloneTree flag to be ignored. The allowDanglingReferences flag, when set to true, allows the cloneTree method to
Constructors
The NodeComponent object specifies the following constructor.
public NodeComponent()
Methods
The following methods are available on NodeComponent objects.
This method copies all node information from originalNodeComponent into the
current node. This method is called from the cloneNodeComponent method,
which is in turn called by the cloneNode method.
[Figure: a Virtual Universe with a Hi-Res Locale, a BranchGroup (BG) node, and the view-side PhysicalBody and PhysicalEnvironment objects]
Objects attached to a particular Locale are all relative to the location of that Locale’s high-resolution coordinates.
[Figure: a Virtual Universe containing Hi-Res Locales with BranchGroup (BG) nodes, group nodes, and leaf nodes]
A 256-bit fixed-point number also has the advantage of being able to directly
represent nearly any reasonable single-precision floating-point value exactly.
High-resolution coordinates in Java 3D are only used to embed more traditional
floating point coordinate systems within a much higher-resolution substrate. In
this way a visually seamless virtual universe of any conceivable size or scale can
be created, without worry about numerical accuracy.
between the location of their high-resolution Locale, and the view platform's
high-resolution Locale. (In the common case of the Locales being the same, no
translation is necessary.)
Constructors
The VirtualUniverse object has the following constructors.
public VirtualUniverse()
Methods
The VirtualUniverse object has the following methods.
The first method returns the Enumeration object of all Locales in this virtual uni-
verse. The numLocales method returns the number of Locales.
Constructors
The Locale object has the following constructors.
These three constructors create a new high-resolution Locale object in the specified VirtualUniverse. The first form constructs a Locale object located at (0.0, 0.0, 0.0). The other two forms construct a Locale object using the specified high-resolution coordinates. In the second form, the parameters x, y, and z are arrays of eight 32-bit integers that specify the respective high-resolution coordinate.
Methods
The Locale object has the following methods. For the Locale picking methods,
see Section 10.3.2, “BranchGroup Node and Locale Node Pick Methods.”
This method retrieves the virtual universe within which this Locale object is contained.
The first three methods add, remove, and replace a branch graph in this Locale.
Adding a branch graph has the effect of making the branch graph “live.” The
fourth method retrieves the number of branch graphs in this Locale. The last
method retrieves an Enumeration object of all branch graphs.
Each high-resolution coordinate is stored as a 256-bit fixed-point value, represented as an array of eight 32-bit integers. Java 3D interprets the integer at index 0 as the 32 most significant bits and the integer at index 7 as the 32 least significant bits.
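This layout can be made concrete with BigInteger. The sketch below is illustrative, not javax.media.j3d code; it assumes the binary point sits at bit 128 (128 integer bits above 128 fraction bits, with coordinates in meters) and, for brevity, handles only non-negative whole-meter values.

```java
import java.math.BigInteger;

public class HiRes {
    static final int FRACTION_BITS = 128;   // assumed binary-point position

    // Pack a non-negative whole-meter value into the eight-int,
    // most-significant-first layout described above.
    public static int[] toHiRes(double meters) {
        BigInteger fixed = BigInteger.valueOf(Math.round(meters))
                .shiftLeft(FRACTION_BITS);
        int[] words = new int[8];
        for (int i = 7; i >= 0; i--) {      // index 7 holds the 32 LSBs
            words[i] = fixed.intValue();    // low-order 32 bits
            fixed = fixed.shiftRight(32);
        }
        return words;
    }

    // Reassemble the eight ints (index 0 first: the 32 MSBs) and drop
    // the fraction bits to recover the whole-meter value.
    public static double toDouble(int[] words) {
        BigInteger fixed = BigInteger.ZERO;
        for (int w : words)
            fixed = fixed.shiftLeft(32)
                         .or(BigInteger.valueOf(w & 0xFFFFFFFFL));
        return fixed.shiftRight(FRACTION_BITS).doubleValue();
    }
}
```

With this layout, a value of 5 meters lands in word index 3 (the lowest word of the integer part, just above the binary point), and the round trip recovers 5.0.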
Constructors
The HiResCoord object has the following constructors.
The first constructor generates the high-resolution coordinate point from three integer arrays of length eight. The integer arrays specify the coordinate values corresponding with their name. The second constructor creates a new high-resolution coordinate point by cloning the high-resolution coordinates hc. The third constructor creates new high-resolution coordinates with value (0.0, 0.0, 0.0).
Methods
These five methods modify the value of the high-resolution coordinates this. The first method resets all three coordinate values with the values specified by the three integer arrays. The second method sets the value of this to that of high-resolution coordinates hiRes. The third, fourth, and fifth methods reset the corresponding coordinate of this.
These five methods retrieve the value of the high-resolution coordinates this.
The first method retrieves the high-resolution coordinates’ values and places
those values into the three integer arrays specified. All three arrays must have
length greater than or equal to eight. The second method updates the value of the
high-resolution coordinates hc to match the value of this. The third, fourth, and
fifth methods retrieve the coordinate value that corresponds to their name and
update the integer array specified, which must be of length eight or greater.
These methods scale a high-resolution coordinate point. The first method scales h1 by the scalar value scale and places the scaled coordinates into this. The second method scales this by the scalar value scale and places the scaled coordinates back into this.
These two methods negate a high-resolution coordinate point. The first method
negates h1 and stores the result in this. The second method negates this and
stores its negated value back into this.
This method subtracts h1 from this and stores the resulting difference vector in the double-precision floating-point vector v. Note that although the individual high-resolution coordinate points cannot be represented accurately by double-precision numbers, this difference vector between them can be accurately represented by doubles for many practical purposes, such as viewing.
This method performs an arithmetic comparison between this and h1. It returns
true if the two high-resolution coordinate points are equal; otherwise, it returns
false.
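The difference-vector property noted above can be checked directly with BigInteger arithmetic. This sketch is illustrative, again assuming the binary point at bit 128: two points about 10^15 meters from the origin, half a meter apart. Neither point is exactly representable as a double at full 2^-128 resolution, but their difference is.

```java
import java.math.BigInteger;

public class HiResDiff {
    public static void main(String[] args) {
        BigInteger halfMeter = BigInteger.ONE.shiftLeft(127); // 0.5 * 2^128
        BigInteger a = BigInteger.valueOf(1_000_000_000_000_000L)
                .shiftLeft(128)
                .add(BigInteger.ONE);        // 10^15 m plus 2^-128 m
        BigInteger b = a.add(halfMeter);     // 0.5 m further out

        // The difference is an exact power of two, so it converts to
        // a double with no loss even though a and b themselves do not.
        double diff = b.subtract(a).doubleValue() / Math.pow(2, 128);
        System.out.println(diff);            // prints 0.5
    }
}
```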
Group nodes are the glue elements used in constructing a scene graph. The following subsections list the seven group nodes (see Figure 4-1) and their definitions. All group nodes can have a variable number of child node objects—including other group nodes as well as leaf nodes. These children have an associated index that allows operations to specify a particular child. However, unless one of the special ordered group nodes is used, the Java 3D renderer can choose to render a group node’s children in whatever order it wishes (including rendering the children in parallel).
SceneGraphObject
    Node
        Group
            BranchGroup
            OrderedGroup
                DecalGroup
            SharedGroup
            Switch
            TransformGroup
Constants
These flags, when enabled using the setCapability method, specify that this
Group node will allow the following methods, respectively:
• numChildren, getChild, getAllChildren
• setChild, insertChild, removeChild
• addChild, moveTo
These capability bits are enforced only when the node is part of a live or compiled scene graph.
These flags, when enabled using the setCapability method, specify that this
Group node will allow reading and writing of its collision bounds.
Constructors
public Group()
Methods
The Group node class defines the following methods.
The first method returns a count of the number of children. The second method
returns the child at the specified index.
The first method replaces the child at the specified index with a new child. The
second method inserts a new child before the child at the specified index. The
third method removes the child at the specified index. Note that if this Group
node is part of a live or compiled scene graph, only BranchGroup nodes may be
added to or removed from it—and only if the appropriate capability bits are set.
This method adds a new child as the last child in the group. Note that if this
Group node is part of a live or compiled scene graph, only BranchGroup nodes
may be added to it—and only if the appropriate capability bits are set.
This method creates a new instance of the node. This routine is called by cloneTree to duplicate the current node.
This method copies all the node information from the originalNode into the
current node. This method is called from the cloneNode method, which is in turn
called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the
NodeComponent’s duplicateOnCloneTree flag is used to determine whether the
NodeComponent should be duplicated in the new node or if just a reference to
the current node should be placed in the new node. This flag can be overridden
by setting the forceDuplicate parameter in the cloneTree method to true.
This method moves the specified BranchGroup node from its old location in the
scene graph to the end of this group, in an atomic manner. Functionally, this
method is equivalent to the following lines:
branchGroup.detach();
this.addChild(branchGroup);
These methods set and retrieve the collision bounding object for a node.
The set method causes this Group node to be reported as the collision target when collision is being used and this node or any of its children is in a collision. The default is false. This method tries to set the capability bit Node.ENABLE_COLLISION_REPORTING. The get method returns the collision target state.
For collision with USE_GEOMETRY set, the collision traverser will check the
geometry of all the Group node’s leaf descendants. For collision with
USE_BOUNDS set, the collision traverser will check the bounds at this Group
node. In both cases, if there is a collision, this Group node will be reported as the
colliding object in the SceneGraphPath.
[Figure: a Virtual Universe with a Hi-Res Locale and BranchGroup (BG) nodes; a BranchGroup node can be reparented or removed at run time]
Constants
The BranchGroup class adds the following new constant.
This flag, when enabled using the setCapability method, allows this BranchGroup node to be detached from its parent group node. This capability flag is enforced only when the node is part of a live or compiled scene graph.
Methods
The BranchGroup class defines the following methods.
This method compiles the scene graph rooted at this BranchGroup and creates
and caches a newly compiled scene graph.
This method creates a new instance of the node. This routine is called by cloneTree to duplicate the current node.
This method copies all the node information from the originalNode into the
current node. This method is called from the cloneNode method, which is in turn
called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the
NodeComponent’s duplicateOnCloneTree value is used to determine whether
the NodeComponent should be duplicated in the new node or if just a reference
to the current node should be placed in the new node. This flag can be overridden
by setting the forceDuplicate parameter in the cloneTree method to true.
The effects of transformations in the scene graph are cumulative. The concatena-
tion of the transformations of each TransformGroup in a direct path from the
Locale to a Leaf node defines a composite model transformation (CMT) that
takes points in that Leaf node’s local coordinates and transforms them into Vir-
tual World (Vworld) coordinates. This composite transformation is used to trans-
form points, normals, and distances into Vworld coordinates. Points are
transformed by the CMT. Normals are transformed by the inverse-transpose of
the CMT. Distances are transformed by the scale of the CMT. In the case of a
transformation containing a nonuniform scale or shear, the maximum scale value
in any direction is used. This ensures, for example, that a transformed bounding
sphere, which is specified as a point and a radius, continues to enclose all objects
that are also transformed using a nonuniform scale.
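The distance rule above can be sketched in plain Java. This is an illustrative computation, not Java 3D API code: it estimates the maximum scale of a 3×3 matrix as the largest length of a transformed basis vector (the largest column norm), which is exact for pure, possibly nonuniform, scales, and uses that value to grow a bounding sphere's radius.

```java
// Sketch of how a distance (e.g., a bounding-sphere radius) is transformed
// under a matrix containing a nonuniform scale. Illustrative only.
public class MaxScale {
    // Approximates the maximum scale of a 3x3 matrix (row-major) by the
    // largest column norm. Exact for pure scales; a lower bound in general.
    public static double maxScale(double[][] m) {
        double best = 0.0;
        for (int col = 0; col < 3; col++) {
            double len = Math.sqrt(m[0][col] * m[0][col]
                                 + m[1][col] * m[1][col]
                                 + m[2][col] * m[2][col]);
            best = Math.max(best, len);
        }
        return best;
    }

    // A transformed bounding sphere keeps its shape: its radius is scaled by
    // the maximum scale so it still encloses nonuniformly scaled contents.
    public static double transformedRadius(double radius, double[][] m) {
        return radius * maxScale(m);
    }
}
```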
Constants
The TransformGroup class adds the following new flags.
These flags, when enabled using the setCapability method, allow this node’s
Transform3D to be read or written. They are only used when the node is part of
a live or compiled scene graph.
Constructors
public TransformGroup()
public TransformGroup(Transform3D t1)
These construct and initialize a new TransformGroup. The first form initializes
the node’s Transform3D to the identity transformation; the second form initial-
izes the node’s Transform3D to a copy of the specified transform.
Methods
The TransformGroup class defines the following methods.
These methods retrieve or set this node’s attached Transform3D object by copy-
ing the transform to or from the specified object.
The first method creates a new instance of the node. This method is called by
cloneTree to duplicate the current node. The second method copies all the node
information from the originalNode into the current node. This method is called
from the cloneNode method, which is in turn called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the
NodeComponent’s duplicateOnCloneTree flag is used to determine whether the
NodeComponent should be duplicated in the new node or a reference to the cur-
rent node should be placed in the new node. This flag can be overridden by set-
ting the forceDuplicate parameter in the cloneTree method to true.
Methods
This method creates a new instance of the node. This routine is called by clone-
Tree to duplicate the current node.
This method copies all the node information from the originalNode into the
current node. This method is called from the cloneNode method, which is in turn
called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the
NodeComponent’s duplicateOnCloneTree value is used to determine whether
the NodeComponent should be duplicated in the new node or if just a reference
to the current node should be placed in the new node. This flag can be overridden
by setting the forceDuplicate parameter in the cloneTree method to true.
The first child, at index 0, defines the surface on top of which all other children
are rendered. The geometry of this child must encompass all other children; oth-
erwise, incorrect rendering may result. The polygons contained within each of
the children must be facing the same way. If the polygons defined by the first
child are front facing, then all other surfaces should be front facing. In this case,
the polygons are rendered in order. The renderer can use knowledge of the copla-
nar nature of the surfaces to avoid Z-buffer collisions (for example, if the under-
lying implementation supports stenciling or polygon offset, then these techniques
may be employed). If the main surface is back facing, then all other surfaces
should be back facing and need not be rendered (even if back-face culling is dis-
abled).
Note that using the DecalGroup node does not guarantee that Z-buffer collisions
are avoided. An implementation of Java 3D may fall back to treating the DecalGroup node as an ordinary OrderedGroup node.
Methods
This method creates a new instance of the node. This routine is called by clone-
Tree to duplicate the current node.
This method copies all the node information from the originalNode into the
current node. This method is called from the cloneNode method, which is in turn
called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the
NodeComponent’s duplicateOnCloneTree value is used to determine whether
the NodeComponent should be duplicated in the new node or if just a reference
to the current node should be placed in the new node. This flag can be overridden
by setting the forceDuplicate parameter in the cloneTree method to true.
Constants
These flags, when enabled using the setCapability method, allow reading and
writing of the values that specify the child-selection criteria. They are only used
when the node is part of a live or compiled scene graph.
These values, when used in place of a non-negative integer index value, indicate
which children of the Switch node are selected for rendering. A value of
CHILD_NONE indicates that no children are rendered. A value of CHILD_ALL indi-
cates that all children are rendered, effectively making this Switch node operate
as an ordinary Group node. A value of CHILD_MASK indicates that the childMask
BitSet is used to select the children that are rendered.
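The child-selection rules above can be modeled in plain Java. The constant values below are illustrative stand-ins, not the actual Switch constants; only the selection logic mirrors the text.

```java
import java.util.BitSet;

// Models the Switch node's child-selection rules. Constant values are
// hypothetical stand-ins; the real constants live in javax.media.j3d.Switch.
public class SwitchSelect {
    public static final int CHILD_NONE = -1;
    public static final int CHILD_ALL  = -2;
    public static final int CHILD_MASK = -3;

    // Returns which of numChildren children are rendered for a given
    // whichChild value and (for CHILD_MASK) a childMask BitSet.
    public static BitSet selected(int whichChild, BitSet childMask, int numChildren) {
        BitSet out = new BitSet(numChildren);
        if (whichChild == CHILD_ALL) {
            out.set(0, numChildren);           // behaves like an ordinary Group
        } else if (whichChild == CHILD_MASK) {
            out.or(childMask);                 // the mask selects the children
        } else if (whichChild >= 0 && whichChild < numChildren) {
            out.set(whichChild);               // a single selected child
        }                                      // CHILD_NONE / out of range: none
        return out;
    }
}
```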
Constructors
public Switch()
public Switch(int whichChild)
public Switch(int whichChild, BitSet childMask)
These constructors initialize a new Switch node using the specified parameters.
The default values for those parameters not specified are as follows:
whichChild: CHILD_NONE
childMask: empty
Methods
The Switch node class defines the following methods.
These methods access or modify the index of the child that the Switch object will
draw. The value may be a non-negative integer, indicating a specific child, or it
may be one of the following constants: CHILD_NONE, CHILD_ALL, or CHILD_MASK.
If the specified value is out of range, then no children are drawn.
These methods access or modify the mask used to select the children that the
Switch object will draw when the whichChild parameter is CHILD_MASK. This
parameter is ignored during rendering if the whichChild parameter is a value
other than CHILD_MASK.
This method returns the currently selected child. If whichChild is out of range,
or is set to CHILD_MASK, CHILD_ALL, or CHILD_NONE, then null is returned.
This method creates a new instance of the node. This routine is called by clone-
Tree to duplicate the current node.
This method copies all the node information from the originalNode into the
current node. This method is called from the cloneNode method, which is in turn
called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the
NodeComponent’s duplicateOnCloneTree value is used to determine whether
the NodeComponent should be duplicated in the new node or if just a reference
to the current node should be placed in the new node. This flag can be overridden
by setting the forceDuplicate parameter in the cloneTree method to true.
Leaf nodes define atomic entities such as geometry, lights, and sounds. The
leaf nodes and their associated meanings follow.
Constructors
public Leaf()
Methods
The Leaf node object defines the following methods.
This method is called by the cloneTree method (see Section 6.2, “Cloning Sub-
graphs”) after all nodes in the subgraph have been cloned. The user can query the
NodeReferenceTable object to determine if any nodes that the Leaf node refer-
ences have been duplicated by the cloneTree call and, if so, what the corre-
sponding Node is in the new subgraph. If a user extends a predefined Java 3D
object and adds a reference to another node, this method must be defined in order
to ensure proper operation of the cloneTree method. The first statement in the
user's updateNodeReferences method must be super.updateNodeReferences(referenceTable). For predefined Java 3D nodes, this method will be implemented automatically.
Version 1.1 Alpha 01, February 27, 1998 49
5.1 Leaf Node LEAF NODE OBJECTS
This method duplicates all nodes of the specified subgraph. For group nodes, the
node is first duplicated via a call to cloneNode and then cloneTree is called for
each child node. For leaf nodes, component data can either be duplicated or be
made a reference to the original data. Leaf node cloneTree behavior is deter-
mined by the duplicateOnCloneTree flag found in every leaf node’s component
data class and by the forceDuplicate parameter.
SceneGraphObject
    Node
        Leaf
            Background
            Behavior
                (predefined behaviors)
            BoundingLeaf
            Clip
            Fog
                ExponentialFog
                LinearFog
            Light
                AmbientLight
                DirectionalLight
                PointLight
                    SpotLight
            Link
            Morph
            Shape3D
            Sound
                BackgroundSound
                PointSound
                    ConeSound
            Soundscape
            ViewPlatform
Constants
The Shape3D node object defines the following flags.
These flags, when enabled using the setCapability method, allow reading and
writing of the Geometry and Appearance component objects and the collision
bounds, respectively. These capability flags are enforced only when the node is
part of a live or compiled scene graph.
Constructors
The Shape3D node object defines the following constructors.
The first form constructs and initializes a new Shape3D object with the specified
geometry and appearance components. The second form uses the specified
geometry and a null appearance component. The third form uses both a null
geometry component and a null appearance component. If the geometry compo-
nent is null, then no geometry is drawn. If the appearance component is null,
then default values are used for all appearance attributes.
Methods
The Shape3D node object defines the following methods.
These methods access or modify the Geometry component object associated with
this Shape3D node.
These methods set and retrieve the collision bounds for this node.
This method creates a new instance of the node. This routine is called by clone-
Tree to duplicate the current node. cloneNode should be overridden by any user-
subclassed objects. All subclasses must have their cloneNode method consist of
the following lines:
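The required lines do not appear in this copy. A sketch of the intended pattern follows, using a hypothetical user subclass (UserShape3D) and a minimal stand-in Node base class rather than the real javax.media.j3d types; only the shape of the cloneNode method is the point.

```java
// Sketch of the cloneNode pattern every user subclass must follow: construct
// a new instance of itself, then let duplicateNode copy the node information
// across. Node and UserShape3D here are hypothetical stand-ins.
public class CloneNodeSketch {
    static class Node {
        void duplicateNode(Node originalNode, boolean forceDuplicate) {
            // In Java 3D this copies all node information from originalNode.
        }
    }
    static class UserShape3D extends Node {
        public Node cloneNode(boolean forceDuplicate) {
            UserShape3D usersShape = new UserShape3D();
            usersShape.duplicateNode(this, forceDuplicate);
            return usersShape;
        }
    }
    public static Node demo() {
        return new UserShape3D().cloneNode(true);
    }
}
```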
This method copies all the node information from the originalNode into the
current node. This method is called from the cloneNode method, which is in turn
called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the
NodeComponent’s duplicateOnCloneTree flag is used to determine whether the
NodeComponent should be duplicated in the new node or if just a reference to
the current node should be placed in the new node. This flag can be overridden
by setting the forceDuplicate parameter in the cloneTree method to true.
This method is called by the cloneTree method (see Section 6.2, “Cloning Sub-
graphs”) after all nodes in the subgraph have been cloned. The user can query the
NodeReferenceTable object to determine if any nodes that the leaf node refer-
ences have been duplicated by the cloneTree call and, if so, what the corre-
sponding node is in the new subgraph. If a user extends a predefined Java 3D
object and adds a reference to another node, this method must be defined in order
to ensure proper operation of the cloneTree method. The first statement in the
user’s updateNodeReferences method must be super.updateNodeRefer-
ences(referenceTable). For predefined Java 3D nodes, this method will be
implemented automatically.
The NodeReferenceTable object is passed to the updateNodeReferences method
and allows references from the old subgraph to be translated into references in
the cloned subgraph. See Section 6.2.5, “NodeReferenceTable Object,” for more
details.
Constants
The BoundingLeaf node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the bounding region
object.
Constructors
The BoundingLeaf node object defines the following constructors.
public BoundingLeaf()
public BoundingLeaf(Bounds region)
The first form constructs a BoundingLeaf node with a unit sphere region object.
The second form constructs a BoundingLeaf node with the specified bounding
region.
Methods
These methods set and retrieve the BoundingLeaf node’s bounding region.
Constants
The Background node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the application region, the
image, the color, and the background geometry. These capability flags are
enforced only when the node is part of a live or compiled scene graph.
Constructors
The Background node object defines the following constructors.
public Background()
public Background(Color3f color)
public Background(float r, float g, float b)
public Background(ImageComponent2D image)
The first form constructs a Background leaf node with a default color of black
(0.0, 0.0, 0.0). The next two forms construct a Background leaf node with the
specified color. The final form constructs a Background leaf node with the spec-
ified 2D image.
Methods
The Background node object defines the following methods.
These two methods access or modify the background image. If the image is not
null then it is used in place of the color.
These two methods access or modify the Background geometry. The setGeome-
try method sets the background geometry to the specified BranchGroup node. If
non-null, this background geometry is drawn on top of the background color or
image using a projection matrix that essentially puts the geometry at infinity. The
geometry should be pretessellated onto a unit sphere.
These two methods access or modify the Background node’s application bounds.
This bounds is used as the application region when the application bounding leaf
is set to null. The getApplicationBounds method returns a copy of the associ-
ated bounds.
These two methods access or modify the Background node’s application bound-
ing leaf. When set to a value other than null, this bounding leaf overrides the
application bounds object and is used as the application region.
Constants
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the application region and
the back distance. These capability flags are enforced only when the node is part
of a live or compiled scene graph.
Constructors
The Clip node object defines the following constructors.
The first constructor constructs a Clip leaf node with the rear clip plane at the
specified distance, in the local coordinate system, from the eye. The second con-
structor constructs a Clip leaf node with a default back clipping distance.
Methods
The Clip node object defines the following methods.
These methods access or modify the back clipping distances in the Clip node.
This distance specifies the back clipping plane in the local coordinate system of
the node.
These two methods access or modify the Clip node’s application bounds. This
bounds is used as the application region when the application bounding leaf is
set to null. The getApplicationBounds method returns a copy of the associated
bounds.
These two methods access or modify the Clip node’s application bounding leaf.
When set to a value other than null, this bounding leaf overrides the application
bounds object and is used as the application region.
will apply to all nodes in the virtual universe that are within the Fog node’s
region of influence.
If the regions of influence of multiple Fog nodes overlap, the Java 3D system
will choose a single set of fog parameters for those objects that lie in the inter-
section. This is done in an implementation-dependent manner, but in general, the
Fog node that is “closest” to the object is chosen.
Constants
The Fog node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read the region of influence, write the
region of influence, read color, and write color. These capability flags are
enforced only when the node is part of a live or compiled scene graph.
Constructors
The Fog node object defines the following constructors.
public Fog()
public Fog(float r, float g, float b)
public Fog(Color3f color)
These constructors each construct a new Fog node. The first constructor uses
default values for all parameters. The remaining constructors use the specified
parameters and use defaults for those parameters not specified. Default values are
as follows:
color: black (0,0,0)
list of scoping nodes: empty
influencingRegion: empty
Methods
The Fog node object defines the following methods.
These three methods access or modify the Fog node’s color. An application will
typically set this to the same value as the background color.
These methods access or modify the Fog node’s influencing bounds. This bounds
is used as the region of influence when the influencing bounding leaf is set to
null. The Fog node operates on all objects that intersect its region of influence.
The getInfluencingBounds method returns a copy of the associated bounds.
These methods access or modify the Fog node’s influencing bounding leaf.
When set to a value other than null, this overrides the influencing bounds object
and is used as the region of influence.
These methods access or modify the Fog node’s hierarchical scope. By default,
Fog nodes are scoped only by their regions of influence. These methods allow
them to be further scoped by a Group node in the hierarchy. The hierarchical
scoping of a Fog node cannot be accessed or modified if the node is part of a live
or compiled scene graph.
Constants
The ExponentialFog node object defines the following flags.
public static final int ALLOW_DENSITY_READ
public static final int ALLOW_DENSITY_WRITE
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the density values. These
capability flags are enforced only when the node is part of a live or compiled
scene graph.
Constructors
The ExponentialFog node object defines the following constructors.
public ExponentialFog()
public ExponentialFog(float r, float g, float b)
public ExponentialFog(Color3f color)
public ExponentialFog(float r, float g, float b, float density)
public ExponentialFog(Color3f color, float density)
Each of these constructors creates a new ExponentialFog node. The first con-
structor uses default values for all parameters. The remaining constructors use
the specified parameters and use defaults for those parameters not specified.
Default values are as follows:
density: 1.0
Methods
The ExponentialFog node object defines the following methods.
These two methods access or modify the density in the ExponentialFog object.
The front and back fog distances are defined in the local coordinate system of the
node, but the actual fog equation will ideally take place in eye coordinates. For
more information on the fog equation, see Appendix E, “Equations.”
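As a rough sketch of the exponential form referenced in Appendix E, the fraction of an object's own color that survives at eye distance d is e^(−density·d). This is an illustrative computation under that assumed form, not Java 3D API code; Java 3D evaluates the fog equation internally.

```java
// Sketch of exponential fog blending: the unfogged fraction f at eye
// distance d is e^(-density * d). Illustrative only.
public class ExpFog {
    public static double fogFactor(double density, double distance) {
        return Math.exp(-density * distance);
    }

    // Blends one color channel of the object toward the fog color channel.
    public static double foggedChannel(double objectC, double fogC,
                                       double density, double distance) {
        double f = fogFactor(density, distance);
        return f * objectC + (1.0 - f) * fogC;
    }
}
```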
Constants
The LinearFog node object defines the following flags.
public static final int ALLOW_DISTANCE_READ
public static final int ALLOW_DISTANCE_WRITE
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the distance values. These
capability flags are enforced only when the node is part of a live or compiled
scene graph.
Constructors
The LinearFog node object defines the following constructors.
public LinearFog()
public LinearFog(float r, float g, float b)
public LinearFog(Color3f color)
public LinearFog(float r, float g, float b, double frontDistance,
double backDistance)
public LinearFog(Color3f color, double frontDistance,
double backDistance)
These constructors each construct a new LinearFog node. The first constructor
uses default values for all parameters. The remaining constructors use the speci-
fied parameters and use defaults for those parameters not specified. Default val-
ues are as follows:
front distance: 0.1
back distance: 1.0
Methods
The LinearFog node object defines the following methods.
These four methods access or modify the front and back distances in the Linear-
Fog object. The front distance is the distance at which the fog starts obscuring
objects. The back distance is the distance at which the fog fully obscures objects.
Objects drawn closer than the front fog distance are not affected by fog. Objects
drawn farther than the back fog distance are drawn entirely in the fog color.
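The front/back behavior just described can be sketched as a linear blend factor: the unfogged fraction falls from 1 at the front distance to 0 at the back distance, clamped outside that range. This is an illustrative model, not Java 3D API code.

```java
// Sketch of linear fog blending per the text: objects closer than the front
// distance are unaffected (factor 1.0); objects beyond the back distance are
// entirely fog-colored (factor 0.0); in between, the factor is linear.
public class LinearFogFactor {
    public static double fogFactor(double front, double back, double distance) {
        if (distance <= front) return 1.0;   // not affected by fog
        if (distance >= back)  return 0.0;   // fully obscured
        return (back - distance) / (back - front);
    }
}
```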
Constants
The Light node object defines the following flags.
These flags, when enabled using the setCapability method, allow reading and
writing of the region of influence, the state, and the color, respectively. These
capability flags are enforced only when the node is part of a live or compiled
scene graph.
Constructors
The Light node object defines the following constructors.
public Light()
public Light(Color3f color)
public Light(boolean lightOn, Color3f color)
Methods
The Light node object defines the following methods.
These methods access or modify the state of this light (that is, whether the light
is enabled).
These methods access or modify the Light node’s influencing bounds. This
bounds is used as the region of influence when the influencing bounding leaf is
set to null. The Light node operates on all objects that intersect its region of
influence. The getInfluencingBounds method returns a copy of the associated
bounds.
These methods access or modify the Light node’s influencing bounding leaf.
When set to a value other than null, this overrides the influencing bounds object
and is used as the region of influence.
These methods access or modify the Light node’s hierarchical scope. By default,
Light nodes are scoped only by their regions of influence bounds. These methods
allow them to be further scoped by a node in the hierarchy.
Constructors
The AmbientLight node defines the following constructors.
public AmbientLight()
public AmbientLight(Color3f color)
public AmbientLight(boolean lightOn, Color3f color)
The first constructor constructs and initializes a new AmbientLight node using
default parameters. The next two constructors construct and initialize a new
AmbientLight node using the specified parameters. The color parameter is the
color of the light source. The lightOn flag indicates whether this light is on or
off.
Constants
The DirectionalLight node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read or write the associated direction.
These capability flags are enforced only when the node is part of a live or com-
piled scene graph.
The DirectionalLight’s direction vector is defined in the local coordinate system
of the node.
Constructors
The DirectionalLight node object defines the following constructors.
public DirectionalLight()
Constructs and initializes a directional light. The default direction of the light is
toward the screen, along the negative z axis.
These constructors construct and initialize a directional light with the parameters
provided.
Methods
The DirectionalLight node object defines the following methods.
Constants
The PointLight node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read position, write position, read atten-
uation parameters, and write attenuation parameters. These capability flags are
enforced only when the node is part of a live or compiled scene graph.
Constructors
The PointLight Node defines the following constructors.
public PointLight()
Constructs and initializes a point light source with the default position at
0.0, 0.0, 0.0.
These constructors construct and initialize a point light with the specified param-
eters.
Methods
The PointLight node object defines the following methods.
These methods access or modify the point light’s current attenuation. The values
presented to the methods specify the coefficients of the attenuation polynomial,
with constant providing the constant term, linear providing the linear coeffi-
cient, and quadratic providing the quadratic coefficient.
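The attenuation polynomial described above scales a point light's contribution by the reciprocal of constant + linear·d + quadratic·d². A minimal sketch, illustrative rather than Java 3D API code:

```java
// Sketch of point-light attenuation: intensity at distance d is scaled by
// 1 / (constant + linear*d + quadratic*d*d). Illustrative only.
public class Attenuation {
    public static double factor(double constant, double linear,
                                double quadratic, double d) {
        return 1.0 / (constant + linear * d + quadratic * d * d);
    }
}
```

With the common default coefficients (1, 0, 0) the factor is 1.0 at every distance, i.e., no falloff.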
Constants
The SpotLight node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write spread angle, concentra-
tion, and direction. These capability flags are enforced only when the node is
part of a live or compiled scene graph.
The SpotLight’s direction vector and spread angle are defined in the local coordi-
nate system of the node.
Constructors
The SpotLight node object defines the following constructors.
public SpotLight()
These construct and initialize a new spotlight with the parameters specified.
Methods
The SpotLight node object defines the following methods.
These methods access or modify the spread angle, in radians, of this spotlight.
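A spotlight falloff of the kind these parameters describe can be sketched as follows. This is an assumed illustrative model (zero contribution outside the spread angle; cosine-power falloff governed by concentration inside it), not the normative equation, which is given in Appendix E.

```java
// Sketch of a spotlight falloff model: no light outside the spread angle;
// inside it, intensity falls off as cos(angle)^concentration, so higher
// concentration values focus the light more tightly. Illustrative only.
public class SpotFalloff {
    public static double intensity(double angleRad, double spreadAngleRad,
                                   double concentration) {
        if (angleRad > spreadAngleRad) return 0.0;
        return Math.pow(Math.cos(angleRad), concentration);
    }
}
```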
Constants
The Sound object contains the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the sound data, the initial
gain information, the loop information, the release flag, the continuous play flag,
the sound on/off switch, the scheduling region, the prioritization value, the dura-
tion information, and the sound playing information. These capability flags are
enforced only when the node is part of a live or compiled scene graph.
This constant defines a floating point value that denotes that no filter value is set.
Filters are described in Section 5.8.3, “ConeSound Node.”
This constant denotes that the sound's duration could not be calculated; it is returned by getDuration as a fallback for a non-cached sound.
Constructors
The Sound node object defines the following constructors.
public Sound()
Constructs and initializes a new Sound node object that includes the following
defaults for its fields:
sound data: null
initial gain: 1.0
loop: 0
release flag: false
continuous flag: false
on switch: false
scheduling region: null (cannot be scheduled)
priority: 1.0
Constructs and initializes a new Sound node object using the provided data and
gain parameter values, and defaults for all other fields. This constructor implic-
itly loads the sound data associated with this node if the implementation uses
sound caching.
Constructs and initializes a new Sound node object using the provided parameter
values.
Methods
The Sound node object defines the following methods.
These methods provide a way to associate different types of audio data with a
Sound node. This data can be cached (buffered) or noncached (unbuffered or
streaming). If the AudioDevice has been attached to the PhysicalEnvironment,
the sound data is made ready to begin playing. Certain functionality cannot be
applied to true streaming sound data: sound duration is unknown, looping is disabled, and the sound cannot be restarted. Furthermore, depending on the implementation of the AudioDevice used, streaming, non-cached data may not be fully
spatialized.
This gain is a scale factor that is applied to the sound data associated with this
sound source to increase or decrease its overall amplitude.
Data for nonstreaming sound (such as a sound sample) can contain two loop
points marking a section of the data that is to be looped a specific number of
times. Thus, sound data can be divided into three segments: the attack (before
the begin loop point), the sustain (between the begin and end loop points), and
the release (after the end loop point). If there are no loop begin and end points
defined as part of the sound data (say for Java Media Player types that do not
contain sound samples), then the begin loop point is set at the beginning of the
sound data, and the end loop point at the end of the sound data. If this is the case,
looping the sound means repeating the whole sound. However, these begin and
end loop points can be placed anywhere within the sound data, allowing a por-
tion in the middle of the sound to be looped.
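The attack/sustain/release partition described above determines how loop points affect total playing time. A minimal sketch, illustrative rather than Java 3D API code, assuming a loopCount of −1 means "loop forever":

```java
// Sketch of how loop points partition sound data and affect total duration:
// attack (before the begin loop point) + the sustain section played
// 1 + loopCount times + release (after the end loop point). A negative
// loopCount (loop forever) yields -1, i.e., unknown/infinite duration.
public class LoopDuration {
    public static long totalMillis(long attack, long sustain, long release,
                                   int loopCount) {
        if (loopCount < 0) return -1;        // looped indefinitely
        return attack + (1L + loopCount) * sustain + release;
    }
}
```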
A sound can be looped a specified number of times after it is activated and
before it is completed. The loop count value explicitly sets the number of times
the sound is looped.
For some applications, it’s useful to turn a sound source “off” but to continue
“silently” playing the sound so that when it is turned back “on” the sound picks
up playing in the same location (over time) as it would have been if the sound
had never been disabled (turned off). Setting the continuous flag to true causes
the sound renderer to keep track of where (over time) the sound would be play-
ing even when the sound is disabled.
These two methods access or modify the Sound node’s scheduling bounds. This
bounds is used as the scheduling region when the scheduling bounding leaf is set
to null. A sound is scheduled for activation when its scheduling region inter-
sects the ViewPlatform’s activation volume. The getSchedulingBounds method
returns a copy of the associated bounds.
These two methods access or modify the Sound node’s scheduling bounding leaf.
When set to a value other than null, this bounding leaf overrides the scheduling
bounds object and is used as the scheduling region.
These methods access or modify the Sound node’s priority, which is used to rank
concurrently playing sounds in order of importance during playback. When more
sounds are started than the AudioDevice can handle, the Sound node with the
lowest priority ranking is deactivated. If a sound is deactivated (due to a sound
with a higher priority being started), it is automatically reactivated when
resources become available (for example, when a sound with a higher priority
finishes playing) or when the ordering of sound nodes is changed due to a change
in a Sound node’s priority.
Sounds with a lower priority than a sound that cannot be played due to a lack of channels may nevertheless be played if enough channels remain for them. For example, assume we have eight channels available
for playing sounds. After ordering four sounds, we begin playing them in order,
checking if the number of channels required to play a given sound are actually
available before the sound is played. Furthermore, say the first sound needs three
channels to play, the second sound needs four channels, the third sound needs
three channels, and the fourth sound needs only one channel. The first and second sounds can be started because together they require seven of the eight available
channels. The third sound cannot be audibly started because it requires three
channels and only one is still available. Consequently, the third sound starts play-
ing “silently.” The fourth sound can and will be started since it only requires one
channel. The third sound will be made audible when three channels become
available (i.e., when the first or second sound is finished playing).
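The eight-channel walkthrough above can be sketched as a greedy allocation over sounds already ordered by priority: each sound is started audibly only if its channel demand still fits, and otherwise plays silently. Illustrative only, not Java 3D API code.

```java
// Sketch of the channel-allocation behavior in the example: sounds are
// visited in priority order; a sound is audible only if its channel demand
// fits in the channels still free, otherwise it plays silently.
public class ChannelAlloc {
    // Returns one flag per sound: true = audible, false = playing silently.
    public static boolean[] audible(int totalChannels, int[] channelsNeeded) {
        boolean[] result = new boolean[channelsNeeded.length];
        int free = totalChannels;
        for (int i = 0; i < channelsNeeded.length; i++) {
            if (channelsNeeded[i] <= free) {
                result[i] = true;
                free -= channelsNeeded[i];
            }
        }
        return result;
    }
}
```

With eight channels and demands {3, 4, 3, 1}, the first, second, and fourth sounds start audibly and the third plays silently, matching the text.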
Sounds given the same priority are ordered randomly. If the application wants a
specific ordering it must assign unique priorities to each sound.
Methods to determine what audio output resources are required for playback of a
Sound node on a particular AudioDevice and to determine the currently available
audio output resources are described in Chapter 11, “Audio Devices.”
These two methods access or modify the playing state of this sound (that is,
whether the sound is enabled). When enabled, the sound source is started and
thus can potentially be heard, depending on its activation state, gain control
parameters, continuation state, and spatialization parameters. If the continuous
state is true and the sound is not active, enabling the sound starts the sound
silently “playing” so that when the sound is activated, the sound is (potentially)
heard from somewhere in the middle of the sound data. The activation state can
change from active to inactive any number of times without stopping or starting
the sound. To restart a sound at the beginning of its data, re-enable the sound by
calling setEnable with a value of true.
Setting the enable flag to true during construction will act as a request to start
the sound playing "as soon as it can" be started. This could be close to immediately in limited cases, but several conditions, detailed below, must be met for a sound to be ready to be played.
This method retrieves the sound's "ready" status, denoting that the sound is fully prepared to begin playing (either audibly or silently). Sound data is associated with a Sound node either during construction (when the MediaContainer is passed into the constructor as a parameter) or by calling setSoundData(); it can be prepared to begin playing only after the following conditions are satisfied:
• The Sound node has non-null sound data associated with it
• The Sound node is live
• There is an active View in the Universe
• There is an initialized AudioDevice associated with the PhysicalEnviron-
ment.
Depending on the type of MediaContainer the sound data is stored in and on the
implementation of the AudioDevice used, sound data preparation could consist of
opening, attaching, loading, or copying into memory the associated sound data.
The query method isReady() returns true when the sound is fully prepro-
cessed so that it is playable (audibly if active, silently if not active).
A sound source will not be heard unless it is both enabled (turned on) and acti-
vated. While these two conditions are met, the sound is potentially audible and
the method isPlaying() will return a status of true.
When the sound finishes playing its sound data (including all loops), it is implic-
itly disabled.
This method returns the sound’s silent status. If a sound is enabled before it is
activated, it begins playing silently. If a sound is enabled and then deactivated
while playing, it continues playing silently. In both of these cases isPlaying()
returns false but the method isPlayingSilently() returns true.
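The interaction of the enable and activation states can be summarized in a few lines. SoundPlayState is a hypothetical illustration of the rules just described, not Java 3D code; the real Sound node derives these states from scheduling-region intersection as well.

```java
// Hypothetical illustration of the enable/activation rules above;
// not part of the Java 3D API.
public class SoundPlayState {
    private boolean enabled;
    private boolean active;

    public void setEnable(boolean e) { enabled = e; }
    public void setActive(boolean a) { active = a; }

    // Audible only when both enabled (turned on) and activated.
    public boolean isPlaying() { return enabled && active; }

    // Enabled but not activated: the sound advances silently.
    public boolean isPlayingSilently() { return enabled && !active; }

    public static void main(String[] args) {
        SoundPlayState s = new SoundPlayState();
        s.setEnable(true);     // enabled before activation: silent playback
        System.out.println(s.isPlaying() + " " + s.isPlayingSilently());
        s.setActive(true);     // now both conditions hold: audible
        System.out.println(s.isPlaying() + " " + s.isPlayingSilently());
    }
}
```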
This method returns the length of time (in milliseconds) that the sound media
associated with the sound source could run (including the number of times its
loop section is repeated) if it plays to completion. If the sound media type is
streaming, or if the sound is looped indefinitely, then a value of –1 (implying
infinite length) is returned.
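The duration rule above can be sketched as follows, under the simplifying assumption that the entire sample is the loop section. SoundDuration and its names are hypothetical, not part of the Java 3D API, which exposes this value through a duration query on the Sound node.

```java
// Hypothetical sketch of the duration rule above, assuming the entire
// sample is the loop section; not part of the Java 3D API.
public class SoundDuration {
    public static final long DURATION_UNKNOWN = -1;  // assumed constant name

    /**
     * @param sampleMillis length of one pass over the sound data
     * @param loopCount    extra repetitions; -1 means loop indefinitely
     * @param streaming    true for streaming media of unknown length
     */
    public static long totalMillis(long sampleMillis, int loopCount,
                                   boolean streaming) {
        if (streaming || loopCount < 0) {
            return DURATION_UNKNOWN;       // "infinite" length
        }
        return sampleMillis * (loopCount + 1);
    }

    public static void main(String[] args) {
        System.out.println(totalMillis(2000, 2, false));  // played 3 times
        System.out.println(totalMillis(2000, -1, false)); // loops forever
    }
}
```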
When a sound is started, it could use more than one channel on the AudioDevice
on which it is played. This method returns the number of channels (on the
executing audio device) being used by this sound. The method returns 0 if the
sound is not playing.
Constructors
The BackgroundSound node specifies the following constructors.
public BackgroundSound()
The first constructor constructs a new BackgroundSound node using only the
provided parameter values for the sound data and initial gain. The second con-
structor uses the provided parameter values for the sound data, initial gain, the
number of times the sound is looped, a flag denoting whether the sound data is
played to the end, a flag denoting whether the sound plays silently when dis-
abled, whether the sound is switched on or off, the sound activation region, and a
priority value denoting the playback priority ranking.
Constants
The PointSound object contains the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the position and the dis-
tance gain array. These capability flags are enforced only when the node is part
of a live or compiled scene graph.
Constructors
The PointSound node object defines the following constructors.
public PointSound()
Constructs a PointSound node object that includes the defaults for a Sound
object plus the following defaults for its own fields:
Position vector: (0.0, 0.0, 0.0)
Distance gain attenuation: null (no attenuation performed)
Both of these constructors construct a PointSound node object using only the
provided parameter values for sound data, sample gain, and position. The
remaining fields are set to the default values specified earlier. The first form uses
a point as input for its position. The second form uses individual float parameters
for the elements of the position.
public PointSound(MediaContainer soundData, float initialGain,
int loopCount, boolean release, boolean continuous,
boolean enable, Bounds region, float priority,
Point3f position, Point2f distanceGain[])
public PointSound(MediaContainer soundData, float initialGain,
int loopCount, boolean release, boolean continuous,
boolean enable, Bounds region, float priority, float posX,
float posY, float posZ, Point2f distanceGain[])
public PointSound(MediaContainer soundData, float initialGain,
int loopCount, boolean release, boolean continuous,
boolean enable, Bounds region, float priority,
Point3f position, float attenuationDistance[],
float attenuationGain[])
public PointSound(MediaContainer soundData, float initialGain,
int loopCount, boolean release, boolean continuous,
boolean enable, Bounds region, float priority, float posX,
float posY, float posZ, float attenuationDistance[],
float attenuationGain[])
These four constructors construct a PointSound node object using the provided
parameter values. The first and third forms use points as input for the position.
The second and fourth forms use individual float parameters for the elements of
the position. The first and second forms accept an array of Point2f for the dis-
tance attenuation values where each pair in the array contains a distance and a
gain scale factor. The third and fourth forms accept separate arrays for the com-
ponents of distance attenuation, namely, the distance and gain scale factors. See
the description for the setDistanceGain method, below, for details on how the
separate arrays are interpreted.
Methods
The PointSound node object defines the following methods.
These methods set and retrieve the position in 3D space from which the sound
radiates.
These methods set and retrieve the sound’s distance attenuation. If this is not set,
no distance gain attenuation is performed (equivalent to using a gain scale factor
of 1.0 for all distances). See Figure 5-2. Gain scale factors are associated with
distances from the listener to the sound source via an array of distance and gain
scale factor pairs. The gain scale factor applied to the sound source is determined
by finding the range of values distance[i] and distance[i+1] that includes
the current distance from the listener to the sound source, then linearly interpo-
lating the corresponding values gain[i] and gain[i+1] by the same amount.
Figure 5-2 Distance Gain Attenuation: gain scale factor (0.0 to 1.0) plotted
against the distance (0 to 30) from the listener to the sound source
If the distance from the listener to the sound source is less than the first distance
in the array, the first gain scale factor is applied to the sound source. This creates
a spherical region around the listener within which all sound gain is uniformly
scaled by the first gain in the array.
If the distance from the listener to the sound source is greater than the last dis-
tance in the array, the last gain scale factor is applied to the sound source.
The first form of setDistanceGain takes these pairs of values as an array of
Point2f. The second form accepts two separate arrays for these values. The dis-
tance and gainScale arrays should be of the same length. If the gainScale
array length is greater than the distance array length, the gainScale array ele-
ments beyond the length of the distance array are ignored. If the gainScale
array is shorter than the distance array, the last gainScale array value is
repeated to fill an array of length equal to distance array.
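The array-length reconciliation rule just described can be sketched as follows. GainArrayUtil and its reconcile method are hypothetical illustrations of the truncate-or-pad behavior, not part of the Java 3D API.

```java
// Sketch of the array-length reconciliation rule described above for the
// two-array form of setDistanceGain; GainArrayUtil is hypothetical.
public class GainArrayUtil {

    /** Returns a gain array exactly as long as the distance array. */
    public static float[] reconcile(float[] distance, float[] gainScale) {
        float[] out = new float[distance.length];
        for (int i = 0; i < distance.length; i++) {
            if (i < gainScale.length) {
                out[i] = gainScale[i];          // use the supplied value
            } else {
                // gainScale is shorter: repeat its last value
                out[i] = gainScale[gainScale.length - 1];
            }
        }
        // Elements of gainScale beyond distance.length are simply ignored.
        return out;
    }

    public static void main(String[] args) {
        float[] d = {10f, 20f, 30f, 40f};
        float[] g = {1.0f, 0.5f};               // too short: pad with 0.5
        System.out.println(java.util.Arrays.toString(reconcile(d, g)));
    }
}
```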
There are two methods for getDistanceGain, one returning an array of points,
the other returning separate arrays for each attenuation component.
Distance elements in this array of Point2f are a monotonically increasing set of
floating-point numbers measured from the location of the sound source. Gain
scale factor elements in this list of pairs can be any positive floating-point num-
bers. While for most applications this list of gain scale factors will usually be
monotonically decreasing, they do not have to be.
Figure 5-2 shows a graphical representation of a distance gain attenuation list.
The values given for distance/gain pairs would be
( (10.0, 1.0), (12.0, 0.9), (16.0, 0.5), (17.0, 0.3),
(20.0, 0.16), (24.0, 0.12), (28.0, 0.05), (30.0, 0.0) )
Thus if the current distance from the listener to the sound source is 22 units, a
scale factor of 0.14 would be applied to the sound amplitude. If the current dis-
tance from the listener to the sound source is less than 10 units, the scale factor
of 1.0 would be applied to the sound amplitude. If the current distance from the
listener to the sound source is greater than 30 units, the scale factor of 0.0 would
be applied to the sound amplitude.
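The lookup described above, including the clamping at both ends of the array, can be sketched as follows. DistanceGainInterp is a hypothetical illustration; Java 3D performs this interpolation internally when rendering the sound.

```java
// Sketch of the distance-gain lookup described above, including the
// clamping at both ends; DistanceGainInterp is hypothetical.
public class DistanceGainInterp {

    /** distance[] must be monotonically increasing; gain[] same length. */
    public static float gainAt(float[] distance, float[] gain, float d) {
        if (d <= distance[0]) {
            return gain[0];                       // inside the inner sphere
        }
        int last = distance.length - 1;
        if (d >= distance[last]) {
            return gain[last];                    // beyond the last distance
        }
        for (int i = 0; i < last; i++) {
            if (d <= distance[i + 1]) {
                float t = (d - distance[i]) / (distance[i + 1] - distance[i]);
                return gain[i] + t * (gain[i + 1] - gain[i]);
            }
        }
        return gain[last];                        // unreachable
    }

    public static void main(String[] args) {
        // The distance/gain pairs from the Figure 5-2 example in the text.
        float[] dist = {10f, 12f, 16f, 17f, 20f, 24f, 28f, 30f};
        float[] gain = {1.0f, 0.9f, 0.5f, 0.3f, 0.16f, 0.12f, 0.05f, 0.0f};
        System.out.println(gainAt(dist, gain, 22f));  // between 0.16 and 0.12
        System.out.println(gainAt(dist, gain, 5f));   // clamped to first gain
        System.out.println(gainAt(dist, gain, 35f));  // clamped to last gain
    }
}
```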
The getDistanceGainLength method returns the length of the distance gain
attenuation arrays. Arrays passed into getDistanceGain methods should all be
at least this size.
Constants
The ConeSound object contains the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the direction and the
angular attenuation array. These capability flags are enforced only when the node
is part of a live or compiled scene graph.
Figure: attenuated values, labeled DistanceGain[0], DistanceGain[1],
angularAttenuation[0], and angularAttenuation[3]
Constructors
The ConeSound node object defines the following constructors.
public ConeSound()
Constructs a ConeSound node object that includes the defaults for a PointSound
object plus the following defaults for its own fields:
Direction vector: (0.0, 0.0, 1.0)
Back attenuation: null
Angular attenuation: ((0.0, 1.0, NO_FILTER), (π/2, 0.0, NO_FILTER))
Both of these constructors construct a ConeSound node object using only the
provided parameter values for sound, overall initial gain, position, and direction.
The remaining fields are set to the default values listed earlier. The first form
uses a point and a vector as input for its position and direction, respectively. The
second form uses individual float parameters for the elements of the position and
direction vectors.
The first form accepts arrays of points for the distance attenuation and angular
values. Each Point2f in the distanceAttenuation array contains a distance and
a gain scale factor. Each Point3f in the angularAttenuation array contains an
angular distance, a gain scale factor, and a filtering value (which is currently
defined as a simple cutoff frequency).
The second form accepts separate arrays for the distance and gain scale factor
components of distance attenuation, and separate arrays for the angular distance,
angular gain, and filtering components of angular attenuation. See the setDis-
tanceGain PointSound method for details on how the separate distance and
distanceGain arrays are interpreted. See the setAngularAttenuation Cone-
Sound method for details on how the separate angularDistance, angularGain,
and filter arrays are interpreted.
public ConeSound(MediaContainer soundData, float initialGain,
int loopCount, boolean release, boolean continuous,
boolean enable, Bounds region, float priority,
Point3f position, Point2f frontDistanceAttenuation[],
Point2f backDistanceAttenuation[], Vector3f direction,
Point3f angularAttenuation[])
public ConeSound(MediaContainer soundData, float initialGain,
int loopCount, boolean release, boolean continuous,
boolean enable, Bounds region, float priority,
float posX, float posY, float posZ, float frontDistance[],
float frontDistanceGain[], float backDistance[],
float backDistanceGain[], float dirX, float dirY,
float dirZ, float angle[], float angularGain[],
float frequencyCutoff[])
Methods
The ConeSound node object defines the following methods.
These methods set and retrieve the ConeSound’s two distance attenuation arrays.
If these are not set, no distance gain attenuation is performed (equivalent to using
a distance gain of 1.0 for all distances). If only one distance attenuation array is
set, spherical attenuation is assumed (see Figure 5-4). If both a front and back
distance attenuation are set, elliptical attenuation regions are defined (see
Figure 5-5). Use the PointSound setDistanceGain method to set the front dis-
tance attenuation array separately from the back distance attenuation array.
Figure 5-4 ConeSound with a Single Distance Gain Attenuation Array (showing
the sound source, the listener, distances, and angular distances)
Gain scale factors are associated with distances from the listener to the sound
source via an array of distance and gain scale factor pairs (see Figure 5-2). The
gain scale factor applied to the sound source is the linear interpolated gain value
within the distance value range that includes the current distance from the lis-
tener to the sound source.
The getDistanceGainLength method (defined in PointSound) returns the length
of all distance gain attenuation arrays, including the back distance gain arrays.
Arrays passed into getBackDistanceGain methods should all be at least this size.
This value is the sound source’s direction vector. It is the axis from which angu-
lar distance is measured.
These methods set and retrieve the sound’s angular gain and filter attenuation
arrays. If these are not set, no angular gain attenuation or filtering is performed
(equivalent to using an angular gain scale factor of 1.0 and an angular filter of
NO_FILTER for all distances). This attenuation is defined as a triple of angular
distance, gain scale factor, and filter values. The distance is measured as the
angle in radians between the ConeSound’s direction vector and the vector from
the sound source position to the listener. Both the gain scale factor and filter
applied to the sound source are the linear interpolation of values within the dis-
tance value range that includes the angular distance from the sound source axis.
If the angle between the vector from the sound source position to the listener
and the sound’s direction vector is less than the first angular distance in the
array, the first gain scale factor and first filter are applied to the sound source.
This creates a conical region around the sound’s direction vector within which
the sound is uniformly attenuated by the first gain and the first filter in the array.
If the angle between the vector from the sound source position to the listener
and the sound’s direction vector is greater than the last angular distance in the
array, the last gain scale factor and last filter are applied to the sound source.
Angular distance elements in this array of points are a monotonically increasing
set of floating-point numbers measured from 0 to π radians. Gain scale factor
elements in this list of points can be any positive floating-point numbers. While
for most applications this list of gain scale factors will be monotonically
decreasing, they do not have to be. The filter (for now) is a single simple
frequency cutoff value.
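The angular lookup described above interpolates both the gain scale factor and the cutoff frequency over the angular distance in radians, with the same end clamping as the distance case. AngularAttenuation is a hypothetical sketch (NO_FILTER handling is omitted for brevity); it is not part of the Java 3D API.

```java
// Sketch of the angular-attenuation lookup described above: both the gain
// scale factor and the cutoff frequency are linearly interpolated over the
// angular distance (in radians). AngularAttenuation is hypothetical.
public class AngularAttenuation {

    /**
     * @param angle  monotonically increasing angular distances, 0 to pi
     * @param gain   gain scale factors (same length as angle)
     * @param cutoff filter cutoff frequencies (same length as angle)
     * @param a      angle between the direction vector and the vector from
     *               the sound source position to the listener
     * @return       {interpolated gain, interpolated cutoff}
     */
    public static float[] lookup(float[] angle, float[] gain,
                                 float[] cutoff, float a) {
        int last = angle.length - 1;
        if (a <= angle[0])    return new float[] {gain[0], cutoff[0]};
        if (a >= angle[last]) return new float[] {gain[last], cutoff[last]};
        for (int i = 0; i < last; i++) {
            if (a <= angle[i + 1]) {
                float t = (a - angle[i]) / (angle[i + 1] - angle[i]);
                return new float[] {
                    gain[i] + t * (gain[i + 1] - gain[i]),
                    cutoff[i] + t * (cutoff[i + 1] - cutoff[i])
                };
            }
        }
        return new float[] {gain[last], cutoff[last]};
    }

    public static void main(String[] args) {
        float[] ang = {0.0f, 1.0f};
        float[] g   = {1.0f, 0.0f};
        float[] c   = {20000f, 10000f};
        float[] r = lookup(ang, g, c, 0.5f);
        System.out.println(r[0] + " " + r[1]);  // halfway between endpoints
    }
}
```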
In the first form of setAngularAttenuation, only the angular distance and
angular gain scale factor pairs are given. The filter values for these tuples are
implicitly set to NO_FILTER. In the second form of setAngularAttenuation, an
array of all three values is supplied.
The third form of setAngularAttenuation accepts three separate arrays for
these angular attenuation values. These arrays should be of the same length. If
the angularGain or filtering array length is greater than the angularDistance
array length, the array elements beyond the length of the angularDistance array
are ignored. If the angularGain or filtering array is shorter than the angu-
larDistance array, the last value of the short array is repeated to fill an array of
length equal to the angularDistance array.
The getAngularAttenuationArrayLength method returns the length of the
angular attenuation arrays. Arrays passed into getAngularAttenuation methods
should all be at least this size.
There are two methods for getAngularAttenuation, one returning an array of
points, the other returning separate arrays for each attenuation component.
Figure 5-3 shows an example of an angular attenuation defining four points of
the form (angular distance in radians, gain scale factor, cutoff filter frequency):
( (0.12, 0.8, NO_FILTER), (0.26, 0.6, 18000.0), (0.32, 0.4, 15000.0),
The reverberation attributes for these two regions could be set to represent their
physical differences so that active sounds are rendered differently depending on
which region the listener is in.
Constants
The Soundscape node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the application region and
the aural attributes. These capability flags are enforced only when the node is
part of a live or compiled scene graph.
Constructors
The Soundscape node object defines the following constructors.
public Soundscape()
Constructs a Soundscape node object that includes the following defaults for its
elements:
application region: null (no active region)
aural attributes: null (uses default aural attributes)
This method constructs a Soundscape node object using the specified application
region and aural attributes.
Methods
The Soundscape node object defines the following methods.
These two methods access or modify the Soundscape node’s application bounds.
This bounds is used as the application region when the application bounding leaf
is set to null. The aural attributes associated with this Soundscape are used to
render the active sounds when this application region intersects the
ViewPlatform’s activation volume. The getApplicationBounds method returns
a copy of the associated bounds.
These two methods access or modify the Soundscape node’s application bound-
ing leaf. When set to a value other than null, this bounding leaf overrides the
application bounds object and is used as the application region.
These two methods access or modify the aural attributes of this Soundscape. Set-
ting it to null results in default attribute use.
Constants
The ViewPlatform node object defines the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the view attach policy.
These capability flags are enforced only when the node is part of a live or com-
piled scene graph.
Methods
The ViewPlatform node object defines the following methods:
The activation radius defines an activation volume surrounding the center of the
ViewPlatform. This activation volume intersects with the scheduling regions and
application regions of other leaf node objects to determine which of those objects
may affect rendering.
Different leaf objects interact with the ViewPlatform’s activation volume differ-
ently. The Background, Clip, and Soundscape leaf objects each define a set of
attributes and an application region in which those attributes are applied. If more
than one node of a given type (Background, Clip, or Soundscape) intersects the
ViewPlatform’s activation volume, the “most appropriate” node is selected.
Sound leaf objects begin playing their associated sounds when their scheduling
region intersects a ViewPlatform’s activation volume. Multiple sounds may be
active at the same time.
Behavior objects act somewhat differently. Those Behavior objects with schedul-
ing regions that intersect a ViewPlatform’s activation volume become candidates
for scheduling. Effectively, a ViewPlatform’s activation volume becomes an
additional qualifier on the scheduling of all Behavior objects. See Chapter 9,
“Behaviors and Interpolators,” for more details.
The view attach policy determines how Java 3D places the user’s virtual eye
point as a function of head position. See Section 8.4.3, “View Attach Policy,” for
details.
Constants
The Morph node specifies the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the GeometryArrays,
appearance, weights, and collision Bounds components.
Constructors
The Morph node specifies the following constructors.
The first form constructs and initializes a new Morph leaf node with the speci-
fied array of GeometryArray objects and a null Appearance object. The second
form uses the specified array of GeometryArray objects and the specified
Appearance object. The length of the geometryArrays parameter determines the
number of weighted geometry arrays in this Morph node. If geometryArrays is
null, then a NullPointerException is thrown. If the Appearance component is
null, then default values are used for all appearance attributes.
Methods
The Morph node specifies the following methods.
This method sets the array of GeometryArray objects in the Morph node. Each
GeometryArray component specifies colors, normals, and texture coordinates.
The length of the geometryArrays parameter must be equal to the length of the
array with which this Morph node was created; otherwise, an Illegal-
ArgumentException is thrown.
This method retrieves a single geometry array from the Morph node. The index
parameter specifies which array is returned.
These methods set and retrieve the Appearance component of this Morph node.
The Appearance component specifies material, texture, texture environment,
transparency, or other rendering parameters. Setting it to null results in default
attribute use.
These methods set and retrieve the morph weight vector component of this
Morph node. The Morph node “weights” the corresponding GeometryArray by
the amount specified. The length of the weights parameter must be equal to the
length of the array with which this Morph node was created; otherwise, an Ille-
galArgumentException is thrown.
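The weighted blend a Morph node performs over its GeometryArrays’ vertex data can be sketched as follows. MorphBlend is a hypothetical illustration operating on raw coordinate arrays, not the actual Morph node implementation.

```java
// Sketch of the weighted blend a Morph node performs over its
// GeometryArrays' vertex coordinates; MorphBlend is hypothetical.
public class MorphBlend {

    /**
     * Blends K coordinate arrays (all the same length) by the K weights.
     * Throws IllegalArgumentException if the lengths disagree, mirroring
     * the rule for the weights parameter described above.
     */
    public static float[] blend(float[][] coords, double[] weights) {
        if (coords.length != weights.length) {
            throw new IllegalArgumentException("weights length mismatch");
        }
        float[] out = new float[coords[0].length];
        for (int k = 0; k < coords.length; k++) {
            for (int i = 0; i < out.length; i++) {
                out[i] += (float) (weights[k] * coords[k][i]);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        float[][] targets = {
            {0f, 0f, 0f},    // first target shape
            {2f, 4f, 6f}     // second target shape
        };
        // Halfway between the two shapes:
        System.out.println(java.util.Arrays.toString(
            blend(targets, new double[] {0.5, 0.5})));
    }
}
```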
These methods set and retrieve the collision bounding object of this node.
Java 3D provides application programmers with two different means for reus-
ing scene graphs. First, multiple scene graphs can share a common subgraph.
Second, the node hierarchy of a common subgraph can be cloned, while still
sharing large component objects such as geometry and texture objects. In the first
case, changes in the shared subgraph affect all scene graphs that refer to the
shared subgraph. In the second case, each instance is unique—a change in one
instance does not affect any other instance.
Figure: a virtual universe containing a hi-res Locale; two BranchGroup (BG)
nodes each contain a Link (L) node referring to a common SharedGroup (SG)
node
A shared subgraph may not contain any of the following types of leaf nodes:
• Background
• BoundingLeaf
• Behavior
• Clip
• Fog
• Soundscape
• ViewPlatform
Methods
The SharedGroup node defines the following methods.
This method compiles the source SharedGroup associated with this object and
creates and caches a newly compiled scene graph.
This method creates a new instance of the node. This routine is called by
cloneTree to duplicate the current node.
This method copies all the node information from the originalNode into the
current node. This method is called from the cloneNode method, which is in turn
called by the cloneTree method.
For each NodeComponent object contained by the object being duplicated, the
NodeComponent’s duplicateOnCloneTree value is used to determine whether
the NodeComponent should be duplicated in the new node or if just a reference
to the current node should be placed in the new node. This flag can be overridden
by setting the forceDuplicate parameter in the cloneTree method to true.
Constants
The Link node object defines two flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write the SharedGroup node
pointed to by this Link node. These capability flags are enforced only when the
node is part of a live or compiled scene graph.
Constructors
The Link node object defines two constructors.
public Link()
public Link(SharedGroup sharedGroup)
The first form constructs a Link node object that does not yet point to a
SharedGroup node. The second form constructs a Link node object that points to
the specified SharedGroup node.
Methods
The Link node object defines two methods.
These methods access and modify the SharedGroup node associated with this
Link leaf node.
Java 3D provides the cloneTree method for this purpose. The cloneTree
method allows the programmer to change some attributes (NodeComponent
objects) in a scene graph, while at the same time sharing the majority of the
scene graph data—the geometry.
Methods
These methods start the cloning of the subgraph. The optional forceDuplicate
parameter, when set to true, causes leaf NodeComponent objects to ignore their
duplicateOnCloneTree value and always be duplicated (see Section 6.2.1,
“References to Node Component Objects”). The allowDanglingReferences
parameter, when set to true, will permit the cloning of a subgraph even when a
dangling reference is generated (see Section 6.2.3, “Dangling References”). Set-
ting forceDuplicate and allowDanglingReferences to false is the equivalent
of calling cloneTree without any parameters. This will result in NodeCompo-
nent objects being either duplicated or referenced in the cloned node, based on
their duplicateOnCloneTree value. A DanglingReferenceException will be
thrown if a dangling reference is encountered.
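The duplicate-or-reference decision described above can be sketched in a few lines. CloneDecision and its nested Component class are hypothetical stand-ins for NodeComponent, not Java 3D code.

```java
// Sketch of the duplicate-or-reference decision cloneTree makes for each
// NodeComponent; the Component class here is a hypothetical stand-in.
public class CloneDecision {

    static class Component {
        final String name;
        boolean duplicateOnCloneTree;   // defaults to false
        Component(String name) { this.name = name; }
    }

    /** Returns a copy when required, otherwise the original reference. */
    public static Component resolve(Component c, boolean forceDuplicate) {
        if (forceDuplicate || c.duplicateOnCloneTree) {
            Component copy = new Component(c.name);   // duplicated
            copy.duplicateOnCloneTree = c.duplicateOnCloneTree;
            return copy;
        }
        return c;                                     // shared reference
    }

    public static void main(String[] args) {
        Component geom = new Component("geometry");
        // Default flag (false): the clone shares the original component.
        System.out.println(resolve(geom, false) == geom);
        // forceDuplicate overrides the flag: a new copy is made.
        System.out.println(resolve(geom, true) == geom);
    }
}
```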
When the cloneTree method is called on a node, that node is duplicated along
with its entire internal state. If the node is a Group node, cloneTree is then
called on each of the node’s children.
The cloneTree method cannot be called on a live or compiled scene graph.
Figure 6-2 shows two instances of NodeComponent objects that are shared and
one NodeComponent element that is duplicated for the cloned subgraph.
Figure 6-2 cloneTree: the Group (G) and Leaf (Lf) nodes of the subgraph are
duplicated, while NodeComponent objects are either shared or duplicated
Methods
These methods set a flag that controls whether a NodeComponent object is dupli-
cated or referenced on a call to cloneTree. By default this flag is false, mean-
ing that the NodeComponent object will not be duplicated on a call to
cloneTree—newly created leaf nodes will refer to the original NodeComponent
object instead.
If the cloneTree method is called with the forceDuplicate parameter set to
true, the duplicateOnCloneTree flag is ignored and the entire scene graph is
duplicated.
refer to the node in the original subgraph—a situation that is most likely incor-
rect (see Figure 6-3).
Figure 6-3 cloneTree applied to subgraphs N1 and N2, in which leaf node Lf1
references leaf node Lf2
Methods
This Leaf node method is called by the cloneTree method after all nodes in the
subgraph have been cloned. The user can query the NodeReferenceTable object
(see Section 6.2.5, “NodeReferenceTable Object”) to determine if any nodes that
the leaf node references have been duplicated by the cloneTree call and, if so,
what the corresponding node is in the new subgraph. If a user extends a pre-
defined Java 3D object and adds a reference to another node, this method must
be defined in order to ensure proper operation of the cloneTree method. The
first statement in the user’s updateNodeReferences method must be
super.updateNodeReferences(referenceTable). For predefined Java 3D
nodes, this method will be implemented automatically.
The NodeReferenceTable object is passed to the updateNodeReferences method
and allows references from the old subgraph to be translated into references in
the cloned subgraph. The translation is performed by the getNewNodeReference
method.
This method takes a reference to the node in the original subgraph as an input
parameter and returns a reference to the equivalent node in the just-cloned sub-
graph. If the equivalent node in the cloned subgraph does not exist, either an
exception is thrown or a reference to the original node is returned (see
Section 6.2.3, “Dangling References”).
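The old-to-new translation can be sketched with a simple map. RefTable, its record method, and its use of a plain RuntimeException in place of DanglingReferenceException are all hypothetical illustrations, not the Java 3D NodeReferenceTable API.

```java
// Sketch of the old-to-new translation getNewNodeReference performs;
// RefTable and its behavior are hypothetical illustrations.
import java.util.HashMap;
import java.util.Map;

public class RefTable {
    private final Map<Object, Object> oldToNew = new HashMap<>();
    private final boolean allowDangling;  // mirrors allowDanglingReferences

    public RefTable(boolean allowDangling) {
        this.allowDangling = allowDangling;
    }

    /** Records that 'original' was cloned as 'clone' during cloneTree. */
    public void record(Object original, Object clone) {
        oldToNew.put(original, clone);
    }

    /** Translates an original-subgraph reference into the cloned subgraph. */
    public Object getNewNodeReference(Object original) {
        Object clone = oldToNew.get(original);
        if (clone != null) return clone;
        if (allowDangling) return original;  // fall back to the original node
        // Stand-in for DanglingReferenceException:
        throw new RuntimeException("dangling reference");
    }

    public static void main(String[] args) {
        RefTable t = new RefTable(true);
        Object oldNode = "oldNode";
        t.record(oldNode, "newNode");
        System.out.println(t.getNewNodeReference(oldNode));  // translated
        System.out.println(t.getNewNodeReference("other"));  // falls back
    }
}
```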
NodeComponent cloneNodeComponent();
void duplicateNodeComponent(NodeComponent nc);
Methods
objectMat.mul(objectMat, rotMat);
t.set(objectMat);
objectTransform.setTransform(t);
RotationBehavior r =
new RotationBehavior(objectTransform, w);
r.duplicateNode(this, forceDuplicate);
return r;
}
// duplicateNode is needed to duplicate all super class
// data as well as all user data.
public void duplicateNode(Node n, boolean forceDuplicate) {
super.duplicateNode(n, forceDuplicate);
// Nothing to do here - all unique data was handled
// in the constructor in the cloneNode routine.
}
Constants
The Appearance component object defines the following flags.
SceneGraphObject
    NodeComponent
        Appearance
        AuralAttributes
        ColoringAttributes
        LineAttributes
        PointAttributes
        PolygonAttributes
        RenderingAttributes
        TextureAttributes
        TransparencyAttributes
        Material
        MediaContainer
        TexCoordGeneration
        Texture
            Texture2D
            Texture3D
        ImageComponent
            ImageComponent2D
            ImageComponent3D
        DepthComponent
            DepthComponentFloat
            DepthComponentInt
            DepthComponentNative
Bounds
    BoundingBox
    BoundingPolytope
    BoundingSphere
Transform3D
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read and write the specified component object refer-
ence (material, texture, texture coordinate generation, and so forth). These
capability flags are enforced only when the object is part of a live or compiled
scene graph.
Constructors
The Appearance object has the following constructor.
public Appearance()
Methods
The Appearance object has the following methods.
The Material object specifies the desired material properties used for lighting.
Setting it to null disables lighting.
The Texture object specifies the desired texture map and texture parameters. Set-
ting it to null disables texture mapping.
These methods set and retrieve the TextureAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the ColoringAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the RenderingAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the PolygonAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the LineAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the PointAttributes object. Setting it to null
results in default attribute use.
These methods set and retrieve the TexCoordGeneration object. Setting it to null
disables texture coordinate generation.
This method creates a new Appearance object. The method is called from a leaf
node’s duplicateNode method.
This method copies the information found in originalNode to the current node.
This routine is called as part of the cloneTree operation.
Constants
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its color component and
shade model component information.
Constructors
public ColoringAttributes()
public ColoringAttributes(Color3f color, int shadeModel)
public ColoringAttributes(float red, float green, float blue,
int shadeModel)
Methods
These methods set and retrieve the intrinsic color of this ColoringAttributes com-
ponent object. This color is used when lighting is disabled or when the Material
is null.
These methods set and retrieve the shade model for this ColoringAttributes com-
ponent object. The shade model is one of the following:
• FASTEST: Uses the fastest available method for shading.
• NICEST: Uses the nicest (highest quality) available method for shading.
• SHADE_FLAT: Does not interpolate color across the primitive.
• SHADE_GOURAUD: Smoothly interpolates the color at each vertex
across the primitive.
This method creates a new ColoringAttributes object. This method is called from
a leaf node’s duplicateNode method.
This method copies the information found in originalNode to the current node.
This method is called as part of the cloneTree operation.
Constants
The LineAttributes object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read and write its individual component field infor-
mation.
Draws a dashed line. Ideally, this will be drawn with a repeating pattern of eight
pixels on and eight pixels off.
Draws a dotted line. Ideally, this will be drawn with a repeating pattern of one
pixel on and seven pixels off.
Draws a dashed-dotted line. Ideally, this will be drawn with a repeating pattern
of seven pixels on, four pixels off, one pixel on, and four pixels off.
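The ideal repeating patterns above can be sketched as run-length masks (illustrative Java; the class, constants, and method are invented for this example and are not Java 3D API members):

```java
// Sketch: expand the ideal run-length descriptions of the line patterns
// into per-pixel on/off masks for one repeat of each pattern.
public class LinePatternSketch {
    public static final int[] PATTERN_DASH = {8, 8};           // 8 on, 8 off
    public static final int[] PATTERN_DOT = {1, 7};            // 1 on, 7 off
    public static final int[] PATTERN_DASH_DOT = {7, 4, 1, 4}; // 7 on, 4 off, 1 on, 4 off

    // Runs alternate on/off, starting with an "on" run.
    public static boolean[] expand(int[] runs) {
        int total = 0;
        for (int r : runs) total += r;
        boolean[] mask = new boolean[total];
        int i = 0;
        boolean on = true;
        for (int r : runs) {
            for (int k = 0; k < r; k++) mask[i++] = on;
            on = !on;
        }
        return mask;
    }
}
```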
Constructors
public LineAttributes()
public LineAttributes(float lineWidth, int linePattern,
boolean lineAntialiasing)
The first constructor creates a LineAttributes object with default values. The sec-
ond constructor creates a LineAttributes object with specified values of line
width, pattern, and whether antialiasing is enabled or disabled.
Methods
These methods respectively set and retrieve the line width, in pixels, for this Lin-
eAttributes component object.
These methods respectively set and retrieve the line pattern for this LineAt-
tributes component object. The linePattern value describes the line pattern to
be used, which is one of the following: PATTERN_SOLID, PATTERN_DASH,
PATTERN_DOT, or PATTERN_DASH_DOT.
The set method enables or disables line antialiasing for this LineAttributes com-
ponent object. The get method retrieves the state of the line antialiasing flag.
The flag is true if line antialiasing is enabled, false if line antialiasing is dis-
abled.
The first method creates a new LineAttributes object; this method is called from
a leaf node’s duplicateNode method. The second method copies the information
found in originalNode to the current node; this method is called as part of the
cloneTree operation.
Constants
The PointAttributes object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read and write its individual component field infor-
mation.
Constructors
public PointAttributes()
public PointAttributes(float pointSize,
boolean pointAntialiasing)
Methods
These methods set and retrieve the point size, in pixels, for this PointAttributes
component object.
The set method enables or disables point antialiasing for this PointAttributes
component object. The get method retrieves the state of the point antialiasing
flag. The flag is true if point antialiasing is enabled, false if point antialiasing
is disabled.
The first method creates a new PointAttributes object; this method is called from
a leaf node’s duplicateNode method. The second method copies the information
found in originalNode to the current node; this method is called as part of the
cloneTree operation.
Constants
The PolygonAttributes object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read and write its individual component field infor-
mation.
Constructors
public PolygonAttributes()
public PolygonAttributes(int polygonMode, int cullFace,
float polygonOffset)
Methods
These methods set and retrieve the face culling flag for this PolygonAttributes
component object. The face culling flag is one of the following:
• CULL_NONE: Performs no face culling.
• CULL_FRONT: Culls all front-facing polygons.
• CULL_BACK: Culls all back-facing polygons.
These methods set and retrieve the polygon rasterization mode for this Polygon-
Attributes component object. The polygon rasterization mode is one of the following:
• POLYGON_POINT: Renders polygonal primitives as points drawn at the
vertices of the polygon.
• POLYGON_LINE: Renders polygonal primitives as lines drawn between
consecutive vertices of the polygon.
• POLYGON_FILL: Renders polygonal primitives by filling the interior of
the polygon.
These methods set and retrieve the polygon offset. This screen-space offset is
added to the final, device-coordinate Z value of polygon primitives.
The first method creates a new PolygonAttributes object; this method is called
from a leaf node’s duplicateNode method. The second method copies the infor-
mation found in originalNode to the current node; this method is called as part
of the cloneTree operation.
Constants
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its individual test value
and function information.
Constructors
public RenderingAttributes()
public RenderingAttributes(boolean depthBufferEnable,
boolean depthBufferWriteEnable, float alphaTestValue,
int alphaTestFunction)
Methods
These methods set and retrieve the depth buffer enable flag for this RenderingAt-
tributes component object. The flag is true if the depth buffer mode is enabled,
false if disabled.
These methods set and retrieve the depth buffer write enable flag for this
RenderingAttributes component object. The flag is true if the depth buffer mode
is writable, false if the depth buffer is read-only.
These methods set and retrieve the alpha test value used by the alpha test func-
tion. This value is compared to the alpha value of each rendered pixel.
These methods set and retrieve the alpha test function. The alpha test function is
one of the following:
• ALWAYS: Indicates pixels are always drawn irrespective of the alpha
value. This effectively disables alpha testing.
• NEVER: Indicates pixels are never drawn irrespective of the alpha value.
• EQUAL: Indicates pixels are drawn if the pixel alpha value is equal to the
alpha test value.
• NOT_EQUAL: Indicates pixels are drawn if the pixel alpha value is not
equal to the alpha test value.
• LESS: Indicates pixels are drawn if the pixel alpha value is less than the
alpha test value.
• LESS_OR_EQUAL: Indicates pixels are drawn if the pixel alpha value is
less than or equal to the alpha test value.
• GREATER: Indicates pixels are drawn if the pixel alpha value is greater
than the alpha test value.
• GREATER_OR_EQUAL: Indicates pixels are drawn if the pixel alpha
value is greater than or equal to the alpha test value.
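The comparisons above can be sketched as a single predicate (illustrative Java; the constant values here are invented for this example and are not Java 3D's actual constants):

```java
// Sketch of the alpha test: a pixel is drawn only if its alpha value
// passes the selected comparison against the alpha test value.
public class AlphaTestSketch {
    public static final int ALWAYS = 0, NEVER = 1, EQUAL = 2, NOT_EQUAL = 3,
            LESS = 4, LESS_OR_EQUAL = 5, GREATER = 6, GREATER_OR_EQUAL = 7;

    public static boolean passes(int function, float pixelAlpha, float testValue) {
        switch (function) {
            case ALWAYS:           return true;  // effectively disables alpha testing
            case NEVER:            return false;
            case EQUAL:            return pixelAlpha == testValue;
            case NOT_EQUAL:        return pixelAlpha != testValue;
            case LESS:             return pixelAlpha < testValue;
            case LESS_OR_EQUAL:    return pixelAlpha <= testValue;
            case GREATER:          return pixelAlpha > testValue;
            case GREATER_OR_EQUAL: return pixelAlpha >= testValue;
            default: throw new IllegalArgumentException("unknown function");
        }
    }
}
```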
The first method creates a new RenderingAttributes object; this method is called
from a leaf node’s duplicateNode method. The second method copies the infor-
mation found in originalNode to the current node; this method is called as part
of the cloneTree operation.
Constants
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its individual component
field information.
Constructors
public TextureAttributes()
public TextureAttributes(int textureMode, Transform3D transform,
Color4f textureBlendColor, int perspCorrectionMode)
Methods
These methods set and retrieve the texture mode parameter for this Texture-
Attributes component object. The texture mode is one of the following:
• MODULATE: Modulates the object color with the texture color.
• DECAL: Applies the texture color to the object as a decal.
• BLEND: Blends the texture blend color with the object color.
• REPLACE: Replaces the object color with the texture color.
These methods set and retrieve the texture blend color for this TextureAttributes
component object. The texture blend color is used when the texture mode param-
eter is BLEND.
These methods set and retrieve the texture transform object used to transform
texture coordinates. A copy of the specified Transform3D object is stored in this
TextureAttributes object.
These methods set and retrieve the perspective correction mode to be used for
color and texture coordinate interpolation. The perspective correction mode is
one of the following:
• NICEST: Uses the nicest (highest quality) available method for texture
mapping perspective correction.
• FASTEST: Uses the fastest available method for texture mapping
perspective correction.
The first method creates a new TextureAttributes object; this method is called
from a leaf node’s duplicateNode method. The second method copies the infor-
mation found in originalNode to the current node; this method is called as part
of the cloneTree operation.
Constants
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its individual component
field information.
Constructors
public TransparencyAttributes()
public TransparencyAttributes(int tMode, float tVal)
Methods
These methods set and retrieve the transparency mode for this Transparency-
Attributes component object. The transparency mode is one of the following:
• FASTEST: Uses the fastest available method for transparency.
• NICEST: Uses the nicest available method for transparency.
• SCREEN_DOOR: Uses screen-door transparency. This is done using an
on/off stipple pattern in which the percentage of transparent pixels is
approximately equal to the value specified by the transparency parameter.
• BLENDED: Uses alpha blended transparency. A blend equation of
(alpha*src + (1 – alpha)*dst) is used, where alpha is (1 – transparency).
• NONE: No transparency; opaque object.
These methods set and retrieve this TransparencyAttributes object's transparency
value. The transparency value is in the range [0.0, 1.0], with 0.0 being fully
opaque and 1.0 being fully transparent.
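The BLENDED equation above, combined with alpha = (1 − transparency), can be sketched per color channel (illustrative Java, not Java 3D's renderer; the class and method names are invented):

```java
// Sketch of BLENDED transparency: result = alpha*src + (1 - alpha)*dst,
// where alpha = (1 - transparency) and transparency is in [0.0, 1.0].
public class TransparencySketch {
    public static float blend(float src, float dst, float transparency) {
        float alpha = 1.0f - transparency;
        return alpha * src + (1.0f - alpha) * dst;
    }
}
```

A transparency of 0.0 yields the source (opaque) color unchanged; a transparency of 1.0 yields the destination color (fully transparent object).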
Constants
The Material object defines two flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that respectively read and write its individual component
field information.
Constructors
The Material object has the following constructors.
public Material()
Constructs and initializes a Material object using default values for all attributes.
The default values are as follows:
ambient color: (0.2, 0.2, 0.2)
emissive color: black (0.0, 0.0, 0.0)
diffuse color: white (1.0, 1.0, 1.0)
specular color: white (1.0, 1.0, 1.0)
shininess: 64.0
Constructs and initializes a new Material object using the specified parameters.
The ambient color, emissive color, diffuse color, specular color, and shininess
parameters are specified.
Methods
The Material object has the following methods.
This parameter specifies this material’s ambient color, that is, how much ambient
light is reflected by the material’s surface.
This parameter specifies the color of light, if any, that the material emits. This
color is added to the color produced by applying the lighting equation.
This parameter specifies the color of the material when illuminated by a light
source. In addition to the diffuse color (red, green, and blue), the alpha value is
used to specify transparency such that transparency = (1 – alpha).
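A highly simplified per-channel sketch of how the emissive, ambient, and diffuse terms combine is shown below. Java 3D's actual lighting equation also includes specular reflection and light attenuation, which are omitted here (illustrative Java; all names are invented):

```java
// Sketch: per-channel contribution of a Material's emissive, ambient, and
// diffuse colors under one light. nDotL is the cosine of the angle between
// the surface normal and the light direction, clamped to zero.
public class MaterialSketch {
    public static float shade(float emissive, float ambient, float lightAmbient,
                              float diffuse, float lightDiffuse, float nDotL) {
        float angle = Math.max(nDotL, 0.0f); // back-facing light contributes nothing
        return emissive + ambient * lightAmbient + diffuse * lightDiffuse * angle;
    }
}
```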
These methods set and retrieve the current state of the lighting enable flag (true
or false) for this Material component object.
The first method creates a new Material object; this method is called from a leaf
node’s duplicateNode method. The second method copies the information
found in originalNode to the current node; this method is called as part of the
cloneTree operation.
This method returns a string representation of this Material’s values. If the scene
graph is live, only those values with their capability bit set will be displayed.
Constants
The Texture object defines the following flags:
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read, and in some cases write, its individual compo-
nent field information.
Constructors
The Texture object has the following constructor.
public Texture()
This constructor is not very useful as the default width and height are 0. The
other default values are as follows:
boundaryModeS: WRAP
boundaryModeT: WRAP
minification filter: BASE_LEVEL_POINT
magnification filter: BASE_LEVEL_POINT
boundary color: black (0,0,0,0)
texture image: null
Methods
The Texture object has the following methods.
These parameters specify the boundary mode for the S and T coordinates in this
Texture object. The boundary mode is one of the following:
• CLAMP: Clamps texture coordinates to be in the range [0, 1]. A constant
boundary color is used for S and T values that fall outside this range.
• WRAP: Repeats the texture by wrapping texture coordinates that are
outside the range [0, 1]. Only the fractional portion of the texture
coordinates is used; the integer portion is discarded.
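The two boundary modes can be sketched on a single texture coordinate (illustrative Java; the class and method names are invented):

```java
// Sketch of the two texture boundary modes applied to one coordinate.
public class BoundaryModeSketch {
    // CLAMP: restrict the coordinate to the range [0, 1].
    public static float clamp(float s) {
        return Math.min(Math.max(s, 0.0f), 1.0f);
    }
    // WRAP: keep only the fractional portion, so the texture repeats.
    public static float wrap(float s) {
        return s - (float) Math.floor(s);
    }
}
```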
This parameter specifies the minification filter function. This function is used
when the pixel being rendered maps to an area greater than one texel. The mini-
fication filter is one of the following:
• FASTEST: Uses the fastest available method for texture filtering.
• NICEST: Uses the nicest (highest quality) available method for texture filtering.
• BASE_LEVEL_POINT: Selects the nearest texel in the level 0 texture
map.
• BASE_LEVEL_LINEAR: Performs a bilinear interpolation on the four
nearest texels in the level 0 texture map.
• MULTI_LEVEL_POINT: Selects the nearest texel in the nearest mipmap.
• MULTI_LEVEL_LINEAR: Performs trilinear interpolation of texels
between four texels each from the two nearest mipmap levels.
This parameter specifies the magnification filter function. This function is used
when the pixel being rendered maps to an area less than or equal to one texel.
The value is one of the following:
• FASTEST: Uses the fastest available method for texture filtering.
• NICEST: Uses the nicest (highest quality) available method for texture filtering.
• BASE_LEVEL_POINT: Selects the nearest texel in the level 0 texture
map.
• BASE_LEVEL_LINEAR: Performs a bilinear interpolation on the four
nearest texels in the level 0 texture map.
These methods set and retrieve the image for a specified mipmap level. Level 0
is the base level.
This parameter specifies the texture boundary color for this Texture object. The
texture boundary color is used when boundaryModeS or boundaryModeT is set to
CLAMP.
These methods set and retrieve the state of texture mapping for this Texture
object. A value of true means that texture mapping is enabled, false means that
texture mapping is disabled.
public final void setMipMapMode(int mipmapMode)
public final int getMipMapMode()
These methods set and retrieve the mipmap mode for texture mapping for this
Texture object. The mipmap mode is either BASE_LEVEL or MULTI_LEVEL_MIPMAP.
Constructors
The Texture2D object has the following constructors.
public Texture2D()
This constructor is not very useful as the default width and height are 0.
public Texture2D(int mipmapMode, int format, int width, int height)
Constructs and initializes a Texture2D object with the specified attributes. The
mipmapMode parameter is either BASE_LEVEL or MULTI_LEVEL_MIPMAP. The for-
mat parameter is one of the following: INTENSITY, LUMINANCE, ALPHA,
LUMINANCE_ALPHA, RGB, or RGBA.
Methods
The first method creates a new Texture2D object; this method is called from a
leaf node’s duplicateNode method. The second method copies the information
found in originalNode to the current node; this method is called as part of the
cloneTree operation.
Constructors
The Texture3D object has the following constructors.
public Texture3D()
This constructor is not very useful as the default width, height, and depth are 0.
Constructs and initializes a Texture3D object using the specified attributes. The
mipmapMode parameter is either BASE_LEVEL or MULTI_LEVEL_MIPMAP. The for-
mat parameter is one of INTENSITY, LUMINANCE, ALPHA, LUMINANCE_ALPHA, RGB,
or RGBA. The default value for a Texture3D object is as follows:
• boundaryModeR: WRAP
Methods
The Texture3D object has the following methods.
This parameter specifies the boundary mode for the R coordinate in this
Texture3D object. The boundary mode is one of the following:
• CLAMP: Clamps texture coordinates to be in the range [0, 1]. A constant
boundary color is used for R values that fall outside this range.
• WRAP: Repeats the texture by wrapping texture coordinates that are
outside the range [0, 1]. Only the fractional portion of the texture
coordinates is used; the integer portion is discarded.
The first method creates a new Texture3D object; this method is called from a
leaf node’s duplicateNode method. The second method copies the information
found in originalNode to the current node; this method is called as part of the
cloneTree operation.
Constants
The TexCoordGeneration object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read, and in some cases write, its individual compo-
nent field information.
Constructors
The TexCoordGeneration object has the following constructors.
public TexCoordGeneration()
public TexCoordGeneration(int genMode, int format)
public TexCoordGeneration(int genMode, int format,
Vector4f planeS)
public TexCoordGeneration(int genMode, int format,
Vector4f planeS, Vector4f planeT)
public TexCoordGeneration(int genMode, int format,
Vector4f planeS, Vector4f planeT, Vector4f planeR)
The first form constructs a TexCoordGeneration object using default values for
all state variables. The other forms construct a TexCoordGeneration object by
initializing the specified fields. Default values are used for those state
variables not specified in the constructor. The parameters are described below.
Methods
The TexCoordGeneration object has the following methods.
This parameter enables or disables texture coordinate generation for this
TexCoordGeneration component object. The value is true if texture coordinate
generation is enabled, false if it is disabled.
This parameter specifies the format, or dimension, of the generated texture coor-
dinates. The format value is either TEXTURE_COORDINATE_2 or TEXTURE_
COORDINATE_3.
This parameter specifies the texture coordinate generation mode. The value is
one of OBJECT_LINEAR, EYE_LINEAR, or SPHERE_MAP.
This parameter specifies the S coordinate plane equation. This plane equation is
used to generate the S coordinate in OBJECT_LINEAR and EYE_LINEAR texture
generation modes.
This parameter specifies the T coordinate plane equation. This plane equation is
used to generate the T coordinate in OBJECT_LINEAR and EYE_LINEAR texture
generation modes.
This parameter specifies the R coordinate plane equation. This plane equation is
used to generate the R coordinate in OBJECT_LINEAR and EYE_LINEAR texture
generation modes.
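For OBJECT_LINEAR generation, each generated coordinate is conventionally the dot product of the plane equation (a, b, c, d) with the object-space vertex (x, y, z, 1); the sketch below assumes that standard formulation (illustrative Java; names are invented):

```java
// Sketch of OBJECT_LINEAR texture coordinate generation: the generated
// coordinate is plane . (x, y, z, 1), e.g. s = a*x + b*y + c*z + d.
public class TexGenSketch {
    public static float generate(float[] plane, float x, float y, float z) {
        return plane[0] * x + plane[1] * y + plane[2] * z + plane[3];
    }
}
```

With the default S plane (1, 0, 0, 0), the generated S coordinate is simply the vertex's x value.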
The first method creates a new TexCoordGeneration object; this method is called
from a leaf node’s duplicateNode method. The second method copies the infor-
mation found in originalNode to the current node; this method is called as part
of the cloneTree operation.
Constants
The MediaContainer object has the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read or write its cached flag and its URL string.
Constructors
The MediaContainer object has the following constructors.
public MediaContainer()
The first constructor constructs and initializes a new MediaContainer object
using default values. Other constructors accept the string path (URL) of the
sound data and force the cache data flag to true.
Methods
The MediaContainer object has the following methods.
This parameter specifies the string path (URL) of the sound data associated with
this component.
The AuralAttributes object is a component object of a Soundscape node that
defines environmental audio parameters affecting sound rendering. These
attributes include gain scale factor, atmospheric rolloff, and parameters
controlling reverberation, distance frequency filtering, and velocity-activated
Doppler effect.
7.1.15.1 Reverberation
Within Java 3D’s simple model for auralization, there are three components to
sound reverberation for a particular listening space:
• Delay time: Approximates the time from the start of a sound until it
reaches the listener after reflecting once off the surfaces in the region.
• Reflection coefficient: Attenuates the reverberated sound uniformly (for
all frequencies) as it bounces off surfaces.
• Feedback loop: Controls the maximum number of times a sound is
reflected off the surfaces.
None of these parameters are affected by sound position. Figure 7-2 shows the
interaction of these parameters.
[Figure 7-2: a sound source feeds a delay element and a reflection coefficient
inside a feedback loop]
The reflection coefficient for reverberation is a single scale factor used to approx-
imate the overall reflective or absorptive characteristics of the surfaces in a rever-
beration region in which the listener is located. This scale factor is applied to the
sound’s amplitude regardless of the sound’s position. A value of 1.0 represents
complete (unattenuated) sound reflection, while a value of 0.0 represents full
absorption (reverberation is disabled).
The reverberation delay time is set either explicitly (in milliseconds), or implic-
itly by supplying an additional bounds volume (so the delay time can be calcu-
lated). The bounds of the reverberation space do not have to be the same as the
application region of the Soundscape node using this object.
The reverberation order defines the number of reverberation (feedback) loop
iterations to be executed while a sound is played. As long as the reflection
coefficient is small enough, the reverberated sound decreases (as it would
naturally) after every loop iteration.
Constants
The AuralAttributes object has the following flags.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read or write the associated parameters.
Constructors
The AuralAttributes object has the following constructors.
public AuralAttributes()
Constructs and initializes a new AuralAttributes object using default values for
all parameters.
Methods
The AuralAttributes object has the following methods.
This parameter specifies an amplitude scale factor applied to the sound. Valid
values are ≥ 0.0.
This scale factor is used to model simple atmospheric conditions that affect the
speed of sound. This affects the time a sound takes to reach the listener after it
has begun playing. The normal speed of sound is scaled by this single rolloff
scale factor, thus increasing or decreasing the usual attenuation. Valid values are
≥ 0.0. Values > 1.0 increase the speed of sound, while values < 1.0 decrease its
speed.
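Assuming a nominal speed of sound of 343 meters per second, the effect of the rolloff scale factor on propagation delay can be sketched as follows (illustrative Java; the constant and names are assumptions, not part of the API):

```java
// Sketch: the rolloff factor scales the speed of sound, so values > 1.0
// shorten the time a sound takes to reach the listener and values < 1.0
// lengthen it.
public class RolloffSketch {
    public static final float SPEED_OF_SOUND = 343.0f; // m/s, assumed nominal value

    public static float delaySeconds(float distanceMeters, float rolloff) {
        return distanceMeters / (SPEED_OF_SOUND * rolloff);
    }
}
```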
This parameter specifies an average amplitude scale factor for all sound waves
(independent of their frequencies) as they reflect off all surfaces within the acti-
vation region in which the listener is located. There is currently no method to
assign different reflective audio properties to individual surfaces. The range of
values is 0.0 to 1.0. A value of 0.0 represents a fully absorptive surface (no sound
waves reflect off), while a value of 1.0 represents a fully reflective surface
(amplitudes of sound waves reflecting off surfaces are not decreased).
This parameter specifies the delay time between each order of reflection while
reverberation is being rendered. In the first form of setReverbDelay, an explicit
delay time is given in milliseconds. In the second form, a reverberation bounds
volume is specified, and then the delay time is calculated, becoming the new
reverb time delay. A value of 0.0 for delay time disables reverberation.
This parameter specifies the maximum number of times reflections will be added
to the reverberation being calculated. When the amplitude of the n-th reflection
reaches effective zero, no further reverberations need be added to the sound
image. A value of 0 disables reverberation. A value of −1 specifies that the rever-
beration calculations will loop indefinitely, until the n-th reflection term reaches
effective zero.
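The interaction of the reflection coefficient with "effective zero" can be sketched by counting feedback iterations until the reflected amplitude falls below a threshold (illustrative Java; names are invented):

```java
// Sketch: each feedback iteration multiplies the sound amplitude by the
// reflection coefficient, so iteration n carries coefficient^n of the
// original amplitude. Counting iterations until that term drops below an
// "effective zero" threshold shows when a reverb order of -1 would stop.
public class ReverbOrderSketch {
    public static int iterationsUntilSilent(float coeff, float effectiveZero) {
        if (coeff <= 0.0f || coeff >= 1.0f)
            throw new IllegalArgumentException("coefficient must be in (0, 1)");
        float amplitude = 1.0f;
        int n = 0;
        while (amplitude >= effectiveZero) {
            amplitude *= coeff; // one reflection off the region's surfaces
            n++;
        }
        return n;
    }
}
```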
This parameter specifies an array of (distance, filter) attenuation pairs. If this
is not set, no distance filtering is performed (equivalent to using a distance
filter of Sound.NO_FILTER for all distances). Currently, this filter is a
low-pass cutoff frequency. This array of pairs defines a piecewise linear slope
for a range of values.
This attenuation array is similar to the PointSound node’s distanceAttenuation
pair array, except that frequency values are paired with distances in this list.
Using these pairs, distance-based low-pass frequency filtering can be applied
during sound rendering. Distances, specified in the local coordinate system in
meters, must be > 0. Frequencies (in Hz) must be > 0.
If the distance from the listener to the sound source is less than the first distance
in the array, the first filter is applied to the sound source. This creates a spherical
region around the listener within which a sound is uniformly attenuated by the
first filter in the array. If the distance from the listener to the sound source is
greater than the last distance in the array, the last filter is applied to the sound
source.
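The clamping rule above, together with the piecewise linear slope between pairs, can be sketched as a lookup (illustrative Java; Java 3D performs this filtering internally during sound rendering, and the names here are invented):

```java
// Sketch: find the low-pass cutoff for a listener at distance d. Below the
// first distance the first cutoff applies; beyond the last distance the last
// cutoff applies; in between, the cutoff is linearly interpolated.
public class DistanceFilterSketch {
    public static float cutoffAt(float[] distance, float[] cutoff, float d) {
        if (d <= distance[0]) return cutoff[0];
        int last = distance.length - 1;
        if (d >= distance[last]) return cutoff[last];
        int i = 1;
        while (distance[i] < d) i++; // first breakpoint at or beyond d
        float t = (d - distance[i - 1]) / (distance[i] - distance[i - 1]);
        return cutoff[i - 1] + t * (cutoff[i] - cutoff[i - 1]);
    }
}
```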
The first form of setDistanceFilter takes these pairs of values as an array of
Point2f. The second form accepts two separate arrays for these values. The dis-
tance and frequencyCutoff arrays should be of the same length. If the fre-
quencyCutoff array length is greater than the distance array length, the
frequencyCutoff array elements beyond the length of the distance array are
ignored. If the frequencyCutoff array is shorter than the distance array, the
last frequencyCutoff array value is repeated to fill an array of length equal to
the distance array.
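The length-reconciliation rule above can be sketched as follows (illustrative Java; names are invented):

```java
// Sketch: align a frequencyCutoff array to the distance array's length.
// Extra cutoff values are ignored; a short cutoff array is padded by
// repeating its last value.
public class FilterArraySketch {
    public static float[] reconcile(float[] distance, float[] cutoff) {
        float[] out = new float[distance.length];
        for (int i = 0; i < distance.length; i++) {
            out[i] = (i < cutoff.length) ? cutoff[i]
                                         : cutoff[cutoff.length - 1];
        }
        return out;
    }
}
```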
The getDistanceFilterLength method returns the length of the distance filter
arrays. Arrays passed into getDistanceFilter methods should all be at least
this size.
There are two methods for getDistanceFilter, one returning an array of points,
the other returning separate arrays for each attenuation component.
Distance elements in this array of pairs are a monotonically increasing set of
floating-point numbers measured from the location of the sound source. Fre-
quency cutoff elements in this list of pairs can be any positive float. While for
most applications this list of values will usually be monotonically decreasing,
they do not have to be.
This parameter specifies a scale factor used to increase or decrease the change
of frequency resulting from the Doppler effect calculated during sound rendering.
This allows the application to exaggerate or reduce the change in frequency
normally resulting from applying the standard Doppler equation to the sound.
Valid values are ≥ 0.0. A value of 0.0 disables any Doppler calculation.
This parameter specifies a scale factor applied to the relative velocity (change in
distance in the local coordinate system between the sound source and the listener
over time) automatically calculated by the Doppler equation during sound ren-
dering. This allows the application to exaggerate or reduce the relative velocity
calculated by the standard Doppler equation. Valid values are ≥ 0.0. A value of
0.0 disables any Doppler calculation.
Constants
The ImageComponent object has the following flags:
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read the associated parameters.
The ImageComponent object specifies the following variables, used to define 2D
or 3D ImageComponent classes. These variables specify the format of the pixel
data.
Specifies that each pixel contains three eight-bit channels, one each for red,
green, and blue. This is the same as FORMAT_RGB8.
Specifies that each pixel contains four eight-bit channels, one each for red, green,
blue, and alpha. This is the same as FORMAT_RGBA8.
Specifies that each pixel contains three eight-bit channels, one each for red,
green, and blue. This is the same as FORMAT_RGB.
Specifies that each pixel contains four eight-bit channels, one each for red, green,
blue, and alpha. This is the same as FORMAT_RGBA.
Specifies that each pixel contains three five-bit channels, one each for red, green,
and blue.
Specifies that each pixel contains three five-bit channels, one each for red, green,
and blue, and a one-bit channel for alpha.
Specifies that each pixel contains three four-bit channels, one each for red, green,
and blue.
Specifies that each pixel contains four four-bit channels, one each for red, green,
blue, and alpha.
Specifies that each pixel contains two four-bit channels, one each for luminance
and alpha.
Specifies that each pixel contains two eight-bit channels, one each for luminance
and alpha.
Specifies that each pixel contains two three-bit channels, one each for red and
green, and a two-bit channel for blue.
Specifies that each pixel contains one eight-bit channel. The channel can be used
for only luminance, alpha, or intensity.
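As a concrete example, a 3-3-2 pixel can be packed into a single byte. The bit layout below (red in the high bits, blue in the low bits) is an assumption for illustration only; the specification does not mandate a layout (illustrative Java):

```java
// Sketch: pack a 3-3-2 pixel into one byte, assuming red occupies bits 7-5,
// green bits 4-2, and blue bits 1-0.
public class Rgb332Sketch {
    public static int pack(int r3, int g3, int b2) {
        return ((r3 & 0x7) << 5) | ((g3 & 0x7) << 2) | (b2 & 0x3);
    }
}
```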
Constructors
The ImageComponent object defines the following constructor.
Methods
The ImageComponent object defines the following methods.
These methods retrieve the width, height, and format of this image component
object.
Constructors
The ImageComponent2D object defines the following constructors.
Methods
The ImageComponent2D object defines the following methods.
This method copies the specified buffered image to this 2D image component
object.
Note: The image must be completely loaded before calling this function.
Constructors
The ImageComponent3D object defines the following constructors.
Methods
The ImageComponent3D object defines the following methods.
The first method copies the specified array of BufferedImage objects to this 3D
image component object. The second method copies the specified BufferedImage
object to this 3D image component object at the specified index.
Constants
The DepthComponent object has the following flags:
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read the associated parameters.
Methods
Constructors
The DepthComponentFloat object defines the following constructors.
Constructs a new floating-point depth (Z-buffer) component object with the spec-
ified width and height.
Methods
These methods set and retrieve the specified depth data for this object.
Constructors
The DepthComponentInt object defines the following constructor.
Constructs a new integer depth (Z-buffer) component object with the specified
width and height.
Methods
These methods set and retrieve the specified depth data for this object.
Constructors
The DepthComponentNative object defines the following constructor.
Constructs a new native depth (Z-buffer) component object with the specified
width and height.
Methods
The Bounds object defines the following methods.
This method sets the value of this Bounds object to enclose the specified bound-
ing object.
These methods test for the intersection of this Bounds object with a ray, a point,
another Bounds object, or an array of Bounds objects, respectively.
This method finds the closest bounding object that intersects this bounding
object.
These methods combine this Bounds object with a bounding object, an array of
bounding objects, a point, or an array of points, respectively.
The first method transforms a Bounds object so that it bounds a volume that is
the result of transforming the given bounding object by the given transform. The
second method transforms the Bounds object by the given transform.
This method tests whether the bounds is empty. A bounds is empty if it is null
(either by construction or as the result of a null intersection) or if its volume is
negative. A bounds with a volume of zero is not empty.
Constructors
The BoundingBox object defines the following constructors.
public BoundingBox()
public BoundingBox(Point3d lower, Point3d upper)
public BoundingBox(Bounds boundsObject)
public BoundingBox(Bounds bounds[])
The first constructor constructs and initializes a 2X unity BoundingBox about the
origin. The second constructor constructs and initializes a BoundingBox from the
given minimum and maximum in x, y, and z. The third constructor constructs and
initializes a BoundingBox from a bounding object. The fourth constructor con-
structs and initializes a BoundingBox from an array of bounding objects.
Methods
The BoundingBox object defines the following methods.
Sets the value of this bounding region to enclose the specified bounding object.
These methods combine this bounding box with a bounding object, an array of
bounding objects, a point, or an array of points, respectively.
The first method transforms a bounding box so that it bounds a volume that is the
result of transforming the given bounding object by the given transform. The sec-
ond method transforms the bounding box by the given transform.
These methods test for the intersection of this bounding box with a ray, a point,
another Bounds object, and an array of Bounds objects, respectively.
These methods compute a new BoundingBox that bounds the volume created by
the intersection of this BoundingBox with another Bounds object or array of
Bounds objects.
This method finds the closest bounding object that intersects this bounding box.
This method tests whether the bounding box is empty. A bounding box is empty
if it is null (either by construction or as the result of a null intersection) or if its
volume is negative. A bounding box with a volume of zero is not empty.
Constructors
The BoundingSphere object defines the following constructors.
public BoundingSphere()
public BoundingSphere(Point3d center, double radius)
public BoundingSphere(Bounds boundsObject)
public BoundingSphere(Bounds boundsObjects[])
The first constructor constructs and initializes a BoundingSphere of radius 1.0
centered at the origin. The second constructor constructs and initializes a
BoundingSphere from the given center and radius. The third constructor con-
structs and initializes a BoundingSphere from a bounding object. The fourth
constructor constructs and initializes a BoundingSphere from an array of bound-
ing objects.
Methods
The BoundingSphere object defines the following methods.
Sets the value of this bounding sphere to enclose the volume specified by the
Bounds object.
These methods combine this bounding sphere with a bounding object, an array
of bounding objects, a point, or an array of points, respectively.
These methods test for the intersection of this bounding sphere with the given
ray, point, another Bounds object, or an array of Bounds objects.
These methods compute a new BoundingSphere that bounds the volume created
by the intersection of this BoundingSphere with another Bounds object or array
of Bounds objects.
This method finds the closest bounding object that intersects this bounding
sphere.
The first method transforms a bounding sphere so that it bounds a volume that is
the result of transforming the given bounding object by the given transform. The
second method transforms the bounding sphere by the given transform. Note that
when transforming a bounding sphere by a transformation matrix containing a
nonuniform scale or a shear, the result is a bounding sphere with a radius equal
to the maximal scale in any direction—the bounding sphere does not transform
into an ellipsoid.
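The radius rule can be illustrated for the simple case of a diagonal (axis-aligned) scale matrix, where the maximal scale in any direction is just the largest absolute scale factor; for a general matrix it is the largest singular value. This sketch is an illustration of the rule, not the BoundingSphere implementation.

```java
public class SphereTransform {
    // Sketch: transforming a bounding sphere by a diagonal (axis-aligned)
    // scale matrix. The radius grows by the maximal scale factor, so the
    // result stays a sphere rather than becoming an ellipsoid.
    public static double transformedRadius(double radius,
                                           double sx, double sy, double sz) {
        double maxScale = Math.max(Math.abs(sx),
                          Math.max(Math.abs(sy), Math.abs(sz)));
        return radius * maxScale;
    }

    public static void main(String[] args) {
        // Nonuniform scale (1, 3, 2): a radius-1 sphere becomes radius 3.
        System.out.println(transformedRadius(1.0, 1.0, 3.0, 2.0)); // 3.0
    }
}
```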
This method tests whether the bounding sphere is empty. A bounding sphere is
empty if it is null (either by construction or as the result of a null intersection)
or if its volume is negative. A bounding sphere with a volume of zero is not
empty.
Constructors
The BoundingPolytope object defines the following constructors.
public BoundingPolytope()
public BoundingPolytope(Vector4d planes[])
public BoundingPolytope(Bounds boundsObject)
public BoundingPolytope(Bounds boundsObjects[])
Methods
The BoundingPolytope object defines the following methods.
These methods set and retrieve the bounding planes for this BoundingPolytope
object.
This method returns the number of bounding planes for this bounding polytope.
This method sets the planes for this BoundingPolytope by keeping its current
number and direction of the planes and computing new plane positions to
enclose the given Bounds object.
The first method transforms a bounding polytope so that it bounds a volume that
is the result of transforming the given bounding object by the given transform.
The second method transforms the bounding polytope by the given transform.
These methods test for the intersection of this BoundingPolytope with the given
ray, point, another Bounds object, or array of Bounds objects, respectively.
These methods compute a new BoundingPolytope that bounds the volume cre-
ated by the intersection of this BoundingPolytope with another Bounds object or
array of Bounds objects.
This method finds the closest bounding object that intersects this bounding poly-
tope.
This method tests whether the bounding polytope is empty. A bounding polytope
is empty if it is null (either by construction or as the result of a null intersection)
or if its volume is negative. A bounding polytope with a volume of zero is not
empty.
Constants
Constructors
The Transform3D object defines the following constructors.
public Transform3D()
This constructs and initializes a new Transform3D object to the identity transfor-
mation.
This constructs and initializes a new Transform3D object from the specified
transform.
These construct and initialize a new Transform3D object from the rotation
matrix, translation, and scale values. The scale is applied only to the rotational
component of the matrix (upper 3 × 3) and not to the translational components of
the matrix.
These construct and initialize a new Transform3D object from the 4 × 4 matrix.
The type of the constructed transform is classified automatically.
These construct and initialize a new Transform3D object from the array of length
16. The top row of the matrix is initialized to the first four elements of the array,
and so on. The type of the constructed transform is classified automatically.
These construct and initialize a new Transform3D object from the quaternion q1,
the translation t1, and the scale s. The scale is applied only to the rotational
components of the matrix (the upper 3 × 3) and not to the translational compo-
nents of the matrix.
This constructs and initializes a new Transform3D object from the upper 4 × 4
of the specified GMatrix. If the specified matrix is smaller than 4 × 4, the
remaining elements in the transformation matrix are set to zero.
Methods
The Transform3D object defines the following methods.
This method retrieves the type of this matrix. The type is an ORed bitmask of all
of the type classifications to which it belongs.
This method retrieves the least general type of this matrix. The order of general-
ity from least to most is as follows: ZERO, IDENTITY, SCALE, TRANSLATION,
ORTHOGONAL, RIGID, CONGRUENT, and AFFINE. If the matrix is ORTHOGONAL, call-
ing the method getDeterminantSign will yield more information.
This method returns the sign of the determinant of this matrix. A return value of
true indicates a positive determinant. A return value of false indicates a nega-
tive determinant. In general, an orthogonal matrix with a positive determinant is
a pure rotation matrix; an orthogonal matrix with a negative determinant is both
a rotation and a reflection matrix.
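A minimal sketch of this classification in plain Java, assuming a 3 × 3 row-major double[9] layout (not the Transform3D internals):

```java
public class DetSign {
    // Sign of the determinant of a 3x3 matrix (row-major double[9]).
    // For an orthogonal matrix, a positive determinant means a pure
    // rotation; a negative determinant means a rotation combined with
    // a reflection. Illustrative sketch only.
    public static boolean isPositiveDeterminant(double[] m) {
        double det = m[0] * (m[4] * m[8] - m[5] * m[7])
                   - m[1] * (m[3] * m[8] - m[5] * m[6])
                   + m[2] * (m[3] * m[7] - m[4] * m[6]);
        return det > 0.0;
    }

    public static void main(String[] args) {
        double[] rotation   = {0, -1, 0,  1, 0, 0,  0, 0, 1};  // 90 deg about Z
        double[] reflection = {1, 0, 0,  0, 1, 0,  0, 0, -1};  // mirror in XY plane
        System.out.println(isPositiveDeterminant(rotation));   // true
        System.out.println(isPositiveDeterminant(reflection)); // false
    }
}
```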
This method sets the rotational component (upper 3 × 3) of this transform to the
rotation matrix converted from the Euler angles provided. The euler parameter
is a Vector3d consisting of roll, pitch, and yaw.
These methods set the rotational component (upper 3 × 3) of this transform to the
values in the specified matrix; the other elements of this transform are
unchanged. A singular value decomposition is performed on this object’s upper
3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix compo-
nents are replaced by the input rotational components, and finally the scale is
reapplied to the rotational components.
These methods set the rotational component (upper 3 × 3) of this transform to the
appropriate values derived from the specified quaternion; the other elements of
this transform are unchanged. A singular value decomposition is performed on
this object’s upper 3 × 3 matrix to factor out the scale, then this object’s upper
3 × 3 matrix components are replaced by the matrix equivalent of the quaternion,
and finally the scale is reapplied to the rotational components.
These methods set the rotational component (upper 3 × 3) of this transform to the
appropriate values derived from the specified axis-angle; the other elements of
this transform are unchanged. A singular value decomposition is performed on
this object’s upper 3 × 3 matrix to factor out the scale, then this object's upper
3 × 3 matrix components are replaced by the matrix equivalent of the axis-angle,
and finally the scale is reapplied to the rotational components.
The set method sets the scale component of this transform by factoring out the
current scale from the rotational component and multiplying by the new scale.
The get method performs an SVD normalization of this transform to calculate
and return the scale factor; this transform is not modified.
The set method sets the possibly non-uniform scale component of the current
transform. Any existing scale is first factored out of the existing transform before
the new scale is applied. The get method returns the possibly non-uniform scale
components of the current transform and places them into the scale vector.
The first method scales transform t1 by a uniform scale matrix with scale factor
s, then adds transform t2 (this = S * t1 + t2). The second method scales this
transform by a uniform scale matrix with scale factor s, then adds transform t1
(this = S * this + t1).
The set methods replace the upper 3 × 3 matrix values of this transform with the
values in the matrix m1. The get methods retrieve the upper 3 × 3 matrix values
of this transform and place them in the matrix m1.
The first add method adds this transform to the transform t1 and places the result
back into this. The second add method adds the transforms t1 and t2 and
places the result into this. The first sub method subtracts transform t1 from this
transform and places the result back into this. The second sub method subtracts
transform t2 from t1 and places the result into this.
The first method adds a scalar to each component of this transform. The second
method adds a scalar to each component of the transform t1 and places the result
into this. Transform t1 is not modified.
The first method transposes this matrix in place. The second method transposes
transform t1 and places the value into this transform. The transform t1 is not
modified.
These three methods set the value of this matrix to a rotation matrix about the
specified axis. The angle to rotate is specified in radians.
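These axis rotations can be sketched in plain Java; rotation about the X axis is shown, with rotY and rotZ following the same pattern. The 3 × 3 row-major double[9] layout is an assumption of this sketch, not the Transform3D representation.

```java
public class AxisRotation {
    // Sketch of a rotation about the X axis by `angle` radians, as a
    // 3x3 row-major matrix.
    public static double[] rotX(double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[] {
            1, 0,  0,
            0, c, -s,
            0, s,  c
        };
    }

    // Multiply the 3x3 matrix by a column vector.
    public static double[] apply(double[] m, double[] v) {
        return new double[] {
            m[0]*v[0] + m[1]*v[1] + m[2]*v[2],
            m[3]*v[0] + m[4]*v[1] + m[5]*v[2],
            m[6]*v[0] + m[7]*v[1] + m[8]*v[2]
        };
    }

    public static void main(String[] args) {
        // Rotating +Y by 90 degrees about X yields +Z.
        double[] r = apply(rotX(Math.PI / 2), new double[] {0, 1, 0});
        System.out.printf("%.1f %.1f %.1f%n", r[0], r[1], r[2]); // 0.0 0.0 1.0
    }
}
```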
This method modifies the translational components of this transform to the val-
ues of the argument. The other values of this transform are not modified.
These methods set the value of this transform to the matrix conversion of the
quaternion argument.
public final void set(Quat4d q1, Vector3d t1, double s)
public final void set(Quat4f q1, Vector3d t1, double s)
public final void set(Quat4f q1, Vector3f t1, float s)
These methods set the value of this matrix from the rotation expressed by the
quaternion q1, the translation t1, and the scale s.
These methods set the translational value of this matrix to the specified vector
parameter values and set the other components of the matrix as if this transform
were an identity matrix.
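A minimal sketch of such a translation matrix, assuming a row-major double[16] layout rather than the actual Transform3D representation:

```java
public class TranslationMatrix {
    // Sketch: a 4x4 row-major matrix set to a pure translation
    // (tx, ty, tz), with all other components as in the identity matrix.
    public static double[] translation(double tx, double ty, double tz) {
        double[] m = new double[16];
        m[0] = m[5] = m[10] = m[15] = 1.0; // identity diagonal
        m[3] = tx; m[7] = ty; m[11] = tz;  // translational column
        return m;
    }
}
```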
These methods set the value of this transform to a scale and translation matrix;
the translation is scaled by the scale factor and all of the matrix values are mod-
ified.
public final void set(Transform3D t1)
This method sets the matrix, type, and state of this transform to the matrix, type,
and state of the transform t1.
public final void set(double matrix[])
public final void set(float matrix[])
These methods set the matrix values of this transform to the specified matrix val-
ues.
The first method sets the value of this transform to a uniform scale; all of the
matrix values are modified. The next two methods set the value of this transform
to a scale and translation matrix; the scale is not applied to the translation and all
of the matrix values are modified.
public final void set(Matrix4d m1)
public final void set(Matrix4f m1)
These methods set the matrix values of this transform to the matrix values in the
specified matrix.
These methods set the rotational and scale components (upper 3 × 3) of this
transform to the matrix values in the specified matrix. The remaining matrix val-
ues are set to the identity matrix. All values of the matrix are modified.
These methods set the value of this matrix from the rotation expressed by the
rotation matrix m1, the translation t1, and the scale s. The scale is only applied to
the rotational component of the matrix (upper 3 × 3) and not to the translational
component of the matrix.
Version 1.1 Alpha 01, February 27, 1998 159
7.1.27 Transform3D Object NODE COMPONENT OBJECTS
These methods set the matrix values of this transform to the matrix values in the
specified matrix. The GMatrix object must specify a 4 × 4, 3 × 4, or 3 × 3 matrix.
These methods set the rotational component (upper 3 × 3) of this transform to the
matrix conversion of the specified axis-angle argument. The remaining matrix
values are set to the identity matrix. All values of the matrix are modified.
These methods place the values of this transform into the specified matrix of
length 16. The first four elements of the array will contain the top row of the
transform matrix, and so on.
public final void get(Matrix4d matrix)
public final void get(Matrix4f matrix)
These methods place the values of this transform into the matrix argument.
public final void get(Matrix3d m1)
public final void get(Matrix3f m1)
These methods place the normalized rotational component of this transform into
the 3 × 3 matrix argument.
These methods place the normalized rotational component of this transform into
the m1 parameter and the translational component into the t1 parameter.
These methods perform an SVD normalization of this matrix to acquire the nor-
malized rotational component. The values are placed into the quaternion q1
parameter.
The first method inverts this transform in place. The second method sets the
value of this transform to the inverse of the transform t1. Both of these methods
use the transform type to determine the optimal algorithm for inverting the trans-
form.
The first method sets the value of this transform to the result of multiplying itself
with transform t1 (this = this * t1). The second method sets the value of this
transform to the result of multiplying transform t1 by transform t2
(this = t1 * t2).
The first method multiplies this transform by the scalar constant. The second
method multiplies transform t1 by the scalar constant and places the value into
this transform.
The first method multiplies this transform by the inverse of transform t1 and
places the result into this transform (this = this * t1⁻¹). The second method mul-
tiplies transform t1 by the inverse of transform t2 and places the result into this
transform (this = t1 * t2⁻¹).
Both of these methods use an SVD normalization. The first normalize method
normalizes the rotational components (upper 3 × 3) of matrix this and places
the results back into this. The second normalize method normalizes the rota-
tional components (upper 3 × 3) of transform t1 and places the result in this.
Both of these methods use a cross-product (CP) normalization. The first normal-
izeCP method normalizes the rotational components (upper 3 × 3) of this trans-
form and places the result into this transform. The second normalizeCP method
normalizes the rotational components (upper 3 × 3) of transform t1 and places the
result into this transform.
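Cross-product normalization can be sketched in plain Java as rebuilding an orthonormal basis from the matrix columns. This illustrates the technique; it is not the actual normalizeCP implementation.

```java
public class CrossProductNormalize {
    // Sketch of cross-product (CP) normalization of a 3x3 rotational
    // component stored row-major in double[9]: the first column is
    // normalized, the third is rebuilt as col0 x col1, and the second
    // as col2 x col0, yielding an orthonormal basis.
    static double[] col(double[] m, int j) {
        return new double[] { m[j], m[3 + j], m[6 + j] };
    }

    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1],
                              a[2]*b[0] - a[0]*b[2],
                              a[0]*b[1] - a[1]*b[0] };
    }

    static double[] normalize(double[] v) {
        double n = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[] { v[0]/n, v[1]/n, v[2]/n };
    }

    public static double[] normalizeCP(double[] m) {
        double[] x = normalize(col(m, 0));
        double[] z = normalize(cross(x, col(m, 1)));
        double[] y = cross(z, x);
        // Reassemble with x, y, z as the columns of the result.
        return new double[] { x[0], y[0], z[0],
                              x[1], y[1], z[1],
                              x[2], y[2], z[2] };
    }
}
```

Unlike SVD normalization, this is cheaper but distributes any residual error unevenly across the axes.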
This method returns true if all of the data members of transform t1 are equal to
the corresponding data members in this transform.
This method returns true if the L∞ distance between this transform and trans-
form m1 is less than or equal to the epsilon parameter; otherwise, it returns
false. The L∞ distance is equal to
MAX[i = 0..3, j = 0..3 : abs(this.m(i,j) - m1.m(i,j))].
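A plain-Java sketch of this comparison, assuming the 4 × 4 matrix is stored row-major in a double array of length 16 (not the actual Transform3D representation):

```java
public class EpsilonEquals {
    // Sketch of the L-infinity comparison: the distance is the largest
    // absolute element-wise difference between two 4x4 matrices
    // (stored row-major as double[16]).
    public static boolean epsilonEquals(double[] a, double[] b, double eps) {
        double max = 0.0;
        for (int i = 0; i < 16; i++) {
            max = Math.max(max, Math.abs(a[i] - b[i]));
        }
        return max <= eps;
    }
}
```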
This method returns a hash number based on the data values in this object. Two
different Transform3D objects with identical data values (that is, true is returned
for trans.equals(Transform3D)) will return the same hash number. Two
Transform3D objects with different data members may return the same hash
value, although this is not likely.
The first two methods transform the vector vec by this transform and place the
result into vecOut. The last two methods transform the vector vec by this trans-
form and place the result back into vec.
The first two methods transform the point parameter by this transform and place
the result into pointOut. The last two methods transform the point parameter
by this transform and place the result back into point. In both cases, the fourth
element of the point input parameter is assumed to be 1.
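Transforming a point with an implied fourth element of 1 can be sketched as follows; the row-major double[16] layout is an assumption of the sketch, not the actual Transform3D code.

```java
public class TransformPoint {
    // Sketch of transforming a 3D point by a 4x4 row-major matrix, with
    // the point's fourth (w) element assumed to be 1, so the matrix's
    // translational column is applied.
    public static double[] transform(double[] m, double[] p) {
        return new double[] {
            m[0]*p[0] + m[1]*p[1] + m[2]*p[2]  + m[3],
            m[4]*p[0] + m[5]*p[1] + m[6]*p[2]  + m[7],
            m[8]*p[0] + m[9]*p[1] + m[10]*p[2] + m[11]
        };
    }

    public static void main(String[] args) {
        // Pure translation by (5, 6, 7) moves the point (1, 1, 1).
        double[] translate = {1,0,0,5,  0,1,0,6,  0,0,1,7,  0,0,0,1};
        double[] q = transform(translate, new double[] {1, 1, 1});
        System.out.println(q[0] + " " + q[1] + " " + q[2]); // 6.0 7.0 8.0
    }
}
```

Because the implied w is 1, the translational column participates; the vector-transforming methods described earlier correspond to an implied w of 0, which discards it.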
The first two methods transform the normal parameter by this transform and
place the value into normalOut. The third and fourth methods transform the nor-
mal parameter by this transform and place the value back into normal.
This is a utility method that specifies the position and orientation of a viewing
transformation. It works very much like the similar function in OpenGL. The
inverse of this transform can be used to control the ViewPlatform object within
the scene graph. Alternatively, this transform can be passed directly to the View’s
VpcToEc transform via the compatibility mode viewing functions defined in
Section C.11.2, “Using the Camera-based View Model.”
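The viewing transformation described above can be sketched in plain Java in the style of OpenGL's gluLookAt. This illustrates the construction only; it is not the Transform3D source, and the row-major double[16] layout is an assumption of the sketch.

```java
public class LookAt {
    // Sketch of a gluLookAt-style viewing matrix built from an eye
    // position, a look-at point, and an up vector.
    static double[] norm(double[] v) {
        double n = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[] { v[0]/n, v[1]/n, v[2]/n };
    }

    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1],
                              a[2]*b[0] - a[0]*b[2],
                              a[0]*b[1] - a[1]*b[0] };
    }

    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    public static double[] lookAt(double[] eye, double[] center, double[] up) {
        // Forward, side, and recomputed up vectors form the view basis.
        double[] f = norm(new double[] { center[0] - eye[0],
                                         center[1] - eye[1],
                                         center[2] - eye[2] });
        double[] s = norm(cross(f, up));
        double[] u = cross(s, f);
        // Rows are the basis vectors; the last column moves the eye
        // to the origin.
        return new double[] {
             s[0],  s[1],  s[2], -dot(s, eye),
             u[0],  u[1],  u[2], -dot(u, eye),
            -f[0], -f[1], -f[2],  dot(f, eye),
             0,     0,     0,     1
        };
    }
}
```

An eye at the origin looking down -Z with +Y up yields the identity matrix, matching the default Java 3D coordinate system.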
Constants
The GeometryArray object defines the following flags.
SceneGraphObject
NodeComponent
Geometry
CompressedGeometry
Raster
Text3D
GeometryArray
GeometryStripArray
LineStripArray
TriangleStripArray
TriangleFanArray
LineArray
PointArray
QuadArray
TriangleArray
IndexedGeometryArray
IndexedGeometryStripArray
IndexedLineStripArray
IndexedTriangleStripArray
IndexedTriangleFanArray
IndexedLineArray
IndexedPointArray
IndexedQuadArray
IndexedTriangleArray
These flags specify that the GeometryArray object allows reading or writing of
the array of coordinates.
These flags specify that the GeometryArray object allows reading or writing of
the array of colors.
These flags specify that the GeometryArray object allows reading or writing of
the array of normals.
These flags specify that the GeometryArray object allows reading or writing of
the array of texture coordinates.
This flag specifies that the GeometryArray object allows reading any count data
(such as the vertex count) associated with the GeometryArray.
Constructors
The GeometryArray object has the following constructor.
Constructs an empty GeometryArray object with the specified vertex format and
number of vertices. The vertexCount parameter specifies the number of vertex
elements in this array. The vertexFormat parameter is a mask indicating which
vertex components are present in each vertex. The vertex format is specified as a
set of flags that are bitwise ORed together to describe the per-vertex data. The
following vertex formats are supported.
• COORDINATES: Specifies that this vertex array contains coordinates.
This bit must be set.
• NORMALS: Specifies that this vertex array contains normals.
• COLOR_3: Specifies that this vertex array contains colors without alpha.
Colors are specified as floating-point values in the range [0.0, 1.0].
• COLOR_4: Specifies that this vertex array contains colors with alpha.
Colors are specified as floating-point values in the range [0.0, 1.0]. This
takes precedence over COLOR_3.
• TEXTURE_COORDINATE_2: Specifies that this vertex array contains
2D texture coordinates (S and T).
• TEXTURE_COORDINATE_3: Specifies that this vertex array contains
3D texture coordinates (S, T, and R). This takes precedence over TEXTURE_
COORDINATE_2.
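The way these flags are bitwise ORed into a vertex format can be sketched as follows. The numeric flag values below are hypothetical stand-ins invented for this sketch; the real constants are defined on javax.media.j3d.GeometryArray and their values are an implementation detail.

```java
public class VertexFormat {
    // Hypothetical flag values for illustration only; the real constants
    // live in javax.media.j3d.GeometryArray.
    public static final int COORDINATES          = 1 << 0;
    public static final int NORMALS              = 1 << 1;
    public static final int COLOR_3              = 1 << 2;
    public static final int COLOR_4              = 1 << 3;
    public static final int TEXTURE_COORDINATE_2 = 1 << 4;
    public static final int TEXTURE_COORDINATE_3 = 1 << 5;

    public static void main(String[] args) {
        // Per-vertex data: positions, normals, and RGB colors.
        int format = COORDINATES | NORMALS | COLOR_3;
        System.out.println((format & NORMALS) != 0);  // true
        System.out.println((format & COLOR_4) != 0);  // false
    }
}
```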
Methods
GeometryArray methods provide access (get and set methods) to individual
vertex component arrays in two different modes: as individual elements or as
arrays of multiple elements.
Sets or retrieves the coordinate associated with the vertex at the specified index.
The coordinate parameter is an array of three values containing the new coordi-
nate.
Sets or retrieves the coordinate associated with the vertex at the specified index.
The coordinate parameter is a point containing the new coordinate.
Sets or retrieves the coordinates associated with the vertices starting at the spec-
ified index. The coordinates parameter is an array of 3*n values containing n
new coordinates.
Sets or retrieves the coordinates associated with the vertices starting at the spec-
ified index. The coordinates parameter is an array of points containing new
coordinates.
These methods set the coordinates associated with the vertices starting at the
specified index for this object, using coordinate data starting from vertex index
start for length vertices.
Sets or retrieves the color associated with the vertex at the specified index. The
color parameter is an array of three or four values containing the new color.
Sets or retrieves the color associated with the vertex at the specified index. The
color parameter is an array containing the new color.
Sets or retrieves the colors associated with the vertices starting at the specified
index. The colors parameter is an array of 3*n or 4*n values containing n new
colors.
Sets or retrieves the colors associated with the vertices starting at the specified
index. The colors parameter is an array containing the new colors.
These methods set the colors associated with the vertices starting at the specified
index for this object, using data in colors starting at index start for length
colors.
Sets or retrieves the normal associated with the vertex at the specified index. The
normal parameter is an array of three values containing the new normal.
Sets or retrieves the normal associated with the vertex at the specified index. The
normal parameter is a vector containing the new normal.
Sets or retrieves the normals associated with the vertices starting at the specified
index. The normals parameter is an array of 3*n values containing n new nor-
mals.
Sets or retrieves the normals associated with the vertices starting at the specified
index. The normals parameter is an array of vectors containing new normals.
These methods set the normals associated with the vertices starting at the speci-
fied index for this object, using data in normals starting at index start and end-
ing at index start+length.
Sets or retrieves the texture coordinate associated with the vertex at the specified
index. The texCoord parameter is an array of two or three values containing the
new texture coordinate.
Sets or retrieves the texture coordinate associated with the vertex at the specified
index. The texCoord parameter is a point containing the new texture coordinate.
Sets or retrieves the texture coordinates associated with the vertices starting at
the specified index. The texCoords parameter is an array of 2*n or 3*n values
containing n new texture coordinates.
Sets or retrieves the texture coordinates associated with the vertices starting at
the specified index. The texCoords parameter is an array of points containing
the new texture coordinate.
These methods set the texture coordinates associated with the vertices starting at
the specified index for this object, using data in texCoords starting at index
start and ending at index start+length.
Constructors
Constructs an empty PointArray object with the specified vertex format and
number of vertices.
Constructors
Constructs an empty LineArray object with the specified vertex format and num-
ber of vertices.
Constructors
Constructs an empty TriangleArray object with the specified vertex format and
number of vertices.
Constructors
Constructs an empty QuadArray object with the specified vertex format and
number of vertices.
Constructors
The GeometryStripArray object has the following constructor.
Methods
The GeometryStripArray object has the following methods.
This method gets an array containing a list of vertex counts for each strip.
Constructors
This constructor constructs an empty TriangleStripArray object with the speci-
fied vertex format, number of vertices, and a set of triangle strips. An array of
per-strip vertex counts specifies where the separate strips appear in the vertex
array. For every strip in the set, each vertex, beginning with the third vertex in
the array, defines a triangle to be drawn using the current vertex and the two
previous vertices.
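The strip-to-triangle rule can be sketched in plain Java; this only enumerates vertex indices and is not how Java 3D renders strips.

```java
public class TriangleStrip {
    // Sketch: expand one vertex strip into individual triangles. Each
    // vertex from the third onward forms a triangle with the two previous
    // vertices, so a strip of n vertices yields n - 2 triangles.
    public static int[][] triangles(int stripVertexCount) {
        int[][] tris = new int[stripVertexCount - 2][3];
        for (int i = 2; i < stripVertexCount; i++) {
            tris[i - 2] = new int[] { i - 2, i - 1, i };
        }
        return tris;
    }

    public static void main(String[] args) {
        System.out.println(triangles(5).length); // 3 triangles from 5 vertices
    }
}
```

Renderers typically also alternate the winding of successive triangles in a strip so that they remain consistently front-facing; the sketch above omits that detail.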
Constructors
Constructors
Constants
The IndexedGeometryArray object defines the following flags.
Constructors
The IndexedGeometryArray object has one constructor that accepts the same
parameters as GeometryArray.
Methods
IndexedGeometryArray methods provide access (get and set methods) to the
individual vertex component index arrays that are used when rendering the
geometry. This access is allowed in two different modes: as individual index ele-
ments or as arrays of multiple index elements.
Sets or retrieves the coordinate index associated with the vertex at the specified
index.
Sets or retrieves the coordinate indices associated with the vertices starting at the
specified index.
Sets or retrieves the color index associated with the vertex at the specified index.
Sets or retrieves the color indices associated with the vertices starting at the spec-
ified index.
Sets or retrieves the normal index associated with the vertex at the specified
index.
Sets or retrieves the normal indices associated with the vertices starting at the
specified index.
Sets or retrieves the texture coordinate index associated with the vertex at the
specified index.
Sets or retrieves the texture coordinate indices associated with the vertices start-
ing at the specified index.
Constructors
The IndexedPointArray object has the following constructor.
Constructors
The IndexedLineArray object has the following constructor.
Constructors
The IndexedTriangleArray object has the following constructor.
Constructors
The IndexedQuadArray object has the following constructor.
Constructors
The IndexedGeometryStripArray object has the following constructor.
Methods
The IndexedGeometryStripArray object has the following methods.
For every strip in the set, each vertex, beginning with the second vertex in the
array, defines a line segment to be drawn from the previous vertex to the current
vertex.
Constructors
The IndexedLineStripArray object has the following constructor.
Constructors
The IndexedTriangleStripArray object has the following constructor.
Constructors
The IndexedTriangleFanArray object has the following constructor.
Constants
The CompressedGeometry object specifies the following variables.
These flags, when enabled using the setCapability method, allow an applica-
tion to invoke methods that read its individual component field information.
Constructors
Methods
Retrieves the header for this CompressedGeometry object. The header is copied
into the CompressedGeometryHeader object provided.
Constants
These flags indicate whether RGB or alpha color information is initialized in the
compressed geometry buffer.
These indicate the major, minor, and minor-minor version numbers for the com-
pressed geometry format that was used to compress the geometry.
This flag describes the type of data in the compressed geometry buffer. Only one
type may be present in any given compressed geometry buffer.
This flag indicates whether a particular data component (for example, color) is
present in the compressed geometry buffer, preceding any geometric data. If a
particular data type is not present then this information will be inherited from the
Appearance object.
These flags indicate the scale, size, and x, y, and z offsets that need to be applied
to every point in the compressed geometry buffer to restore the geometry to its
original (uncompressed) position.
Constructors
public CompressedGeometryHeader()
Constants
The Raster object defines the following flags.
These flags specify that the Raster object allows reading or writing of the posi-
tion, offset, image, depth component, size, or type.
Specifies a Raster object with color data. In this mode, the ImageComponent ref-
erence must point to a valid ImageComponent object.
Specifies a Raster object with depth (Z-buffer) data. In this mode, the depth com-
ponent reference must point to a valid DepthComponent object.
Specifies a Raster object with both color and depth (Z-buffer) data. In this mode,
the image component reference must point to a valid ImageComponent object,
and the depth component reference must point to a valid DepthComponent
object.
Constructors
public Raster()
public Raster(Point3f pos, int type, int xOffset, int yOffset,
int width, int height, ImageComponent2D image,
DepthComponent depthComponent)
public Raster(Point3f pos, int type, Point offset, Dimension size,
ImageComponent2D image, DepthComponent depthComponent)
Constructs and initializes a new Raster object. The first form uses default values.
The next two forms construct a new raster image with the specified values.
Methods
These methods set and retrieve the position, in object coordinates, of this raster.
This position is transformed into device coordinates and is used as the upper-left
corner of the raster.
These methods set and retrieve the type of this Raster object. The type is one of
the following: RASTER_COLOR, RASTER_DEPTH, or RASTER_COLOR_DEPTH.
These methods set and retrieve the offset within the array of pixels at which to
start copying.
These methods set and retrieve the number of pixels to be copied from the pixel
array.
These methods set and retrieve the pixel array used to copy pixels to or from a
Canvas3D. This is used when the type is RASTER_COLOR or RASTER_
COLOR_DEPTH.
These methods set and retrieve the DepthComponent used to copy pixels to or
from a Canvas3D. This is used when the type is RASTER_DEPTH or RASTER_
COLOR_DEPTH.
Constructors
Creates a Font3D object from the specified Font object. The FontExtrusion
object (see Section 7.2.23, “FontExtrusion Object”) contains the extrusion path
to use on the 2D font glyphs. To ensure correct rendering, the font must be cre-
ated with the default AffineTransform. The point size of a Font object is used as
a rough measure of how finely to tessellate the glyphs. A larger point size will, in
general, have finer detail than the same font with a smaller point size. Passing
null for the FontExtrusion parameter results in no extrusion being done.
Methods
This method returns the 3D bounding box of the specified glyph code.
This method returns the Java 2D font used to create this Font3D object.
This method retrieves the FontExtrusion object used to create this Font3D object,
and copies it into the specified parameter. For information about the FontExtru-
sion object, see Section 7.2.23, “FontExtrusion Object.”
Constructors
public FontExtrusion()
public FontExtrusion(Shape extrusionShape)
Methods
These methods set and retrieve the 2D Shape object associated with this FontEx-
trusion object. The Shape object describes the extrusion path used to create a 3D
glyph from a 2D glyph. The get method copies the shape from this object to the
given parameter. The set method copies the given shape into this FontExtrusion
object.
Constants
The Text3D object defines the following flags.
These flags control reading and writing of the Font3D component information
for Font3D, the String object, the text position value, the text alignment value,
the text path value, the character spacing, and the bounding box.
Constructors
public Text3D()
public Text3D(Font3D font3D)
public Text3D(Font3D font3D, String string)
public Text3D(Font3D font3D, String string, Point3f position)
public Text3D(Font3D font3D, String string, Point3f position,
int alignment, int path)
Create a new Text3D object. The first constructor creates the Text3D object with
no Font3D object associated with it, a null string, and all default values: a posi-
tion of (0.0, 0.0, 0.0), an alignment of ALIGN_FIRST, and a path of PATH_RIGHT.
The other constructors set the appropriate values to the passed-in parameters.
Methods
These methods get and set the Font3D object associated with this Text3D object.
These methods get and set the character string associated with this Text3D
object.
These methods get and set the text position. The position parameter is used to
determine the initial placement of the string. The text position is used in conjunc-
tion with the alignment and path to determine how the glyphs are to be placed in
the scene. The default value is (0.0, 0.0, 0.0).
These methods set and get the text alignment policy for this Text3D NodeCom-
ponent object (see Figure 7-4). The alignment parameter is used to specify how
glyphs in the string are placed in relation to the position field. Valid values for
the alignment field are:
• ALIGN_CENTER: places the center of the string on the position point.
• ALIGN_FIRST: places the first character of the string on the position
point.
• ALIGN_LAST: places the last character of the string on the position point.
The default value of this field is ALIGN_FIRST.
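The placement rules above can be sketched with simple offset arithmetic. This is an illustration only, not the Java 3D implementation: the helper class, its constant values, and the notion of a precomputed string width are all hypothetical.

```java
// Illustration of how the three Text3D alignment policies relate a string
// to the text position point along the path direction. Hypothetical helper,
// not part of the Java 3D API: given the total string width, it returns the
// offset of the string's starting edge from the position point.
public class AlignmentSketch {
    static final int ALIGN_FIRST = 0, ALIGN_CENTER = 1, ALIGN_LAST = 2;

    static double startOffset(int alignment, double stringWidth) {
        switch (alignment) {
            case ALIGN_FIRST:  return 0.0;               // first glyph on the point
            case ALIGN_CENTER: return -stringWidth / 2;  // center on the point
            case ALIGN_LAST:   return -stringWidth;      // last glyph on the point
            default: throw new IllegalArgumentException("unknown alignment");
        }
    }

    public static void main(String[] args) {
        System.out.println(startOffset(ALIGN_FIRST, 4.0));   // 0.0
        System.out.println(startOffset(ALIGN_CENTER, 4.0));  // -2.0
        System.out.println(startOffset(ALIGN_LAST, 4.0));    // -4.0
    }
}
```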
These methods set and get the node’s path field. This field is used to specify how
succeeding glyphs in the string are placed in relation to the previous glyph (see
Figure 7-4). The path is relative to the local coordinate system of the Text3D
node. The default coordinate system (see Section 3.4, “Coordinate Systems”) is
right-handed with +Y being up, +X horizontal to the right, and +Z directed
toward the viewer. Valid values for this field are as follows:
[Figure: sample strings rendered with each alignment value (ALIGN_FIRST, ALIGN_CENTER, ALIGN_LAST) and each path value (left, right, up, down), with a marker indicating the text position point.]
Figure 7-4 Various Text Alignments and Paths
This method retrieves the 3D bounding box that encloses this Text3D object.
These methods set and get the character spacing used to construct the Text3D
string. This spacing is in addition to the regular spacing between glyphs as
defined in the Font object. A value of 1.0 in this space is measured as the width
of the largest glyph in the 2D font. The default value is 0.0.
Java 3D introduces a new view model that takes Java’s vision of “write once,
run anywhere” and generalizes it to include display devices and
six-degrees-of-freedom input peripherals such as head trackers. This “write once,
view everywhere” nature of the new view model means that an application or
applet written using the Java 3D view model can render images to a broad range
of display devices, including standard computer displays, multiple-projection
display rooms, and head-mounted displays, without modification of the scene
graph. It also means that the same application, once again without modification,
can render stereoscopic views and can take advantage of the input from a head
tracker to control the rendered view.
Java 3D’s view model achieves this versatility by cleanly separating the virtual
and the physical world. This model distinguishes between how an application
positions, orients, and scales a ViewPlatform object (a viewpoint) within the vir-
tual world and how the Java 3D renderer constructs the final view from that
viewpoint’s position and orientation. The application controls the ViewPlatform’s
position and orientation; the renderer computes what view to render using this
position and orientation, a description of the end-user’s physical environment,
and the user’s position and orientation within the physical environment.
This chapter first explains why Java 3D chose a different view model and some
of the philosophy behind that choice. It next describes how that model operates
in the simple case of a standard computer screen without head tracking—the
most common case. Finally, it presents the relevant parts of the API from a
developer’s perspective. Appendix C, “View Model Details,” describes the
Java 3D view model from an advanced developer and Java 3D implementor’s
perspective.
include the size of the physical display, how the display is mounted (on the user’s
head or on a table), whether the computer knows the user’s head location in three
space, the head mount’s actual field of view, the display’s pixels per inch, and
other such parameters. For more information, see Appendix C, “View Model
Details.”
any) defines the local physical world coordinate system known to a particular
instance of Java 3D.
[Figure: the View object with its attached ViewPlatform (under a BranchGroup in a Hi-Res Locale of the Virtual Universe) and its PhysicalBody and PhysicalEnvironment component objects.]
Figure 8-1 View Object, Its Component Objects, and Their Interconnection
The view-related objects shown in Figure 8-1 and their roles are as follows. For
each of these objects, the portion of the API that relates to modifying the virtual
world and the portion of the API that is relevant to non-head-tracked standard
display configurations are derived in this chapter. The remainder of the details
are described in Appendix C, “View Model Details.”
• ViewPlatform: A leaf node that locates a view within a scene graph. The
ViewPlatform’s parents specify its location, orientation, and scale within
the virtual universe.
[Figure: a scene graph with a BranchGroup and TransformGroup in a Hi-Res Locale of the Virtual Universe, together with the Physical Body and Physical Environment objects.]
object about its center point. In that figure, the Behavior object modifies the
TransformGroup directly above the Shape3D node.
An alternative application scene graph, shown in Figure 8-3, leaves the central
object alone and moves the ViewPlatform around the world. If the shape node
contains a model of the earth, this application could generate a view similar to
that seen by astronauts as they orbit the earth.
Had we populated this world with more objects, this scene graph would allow
navigation through the world via the Behavior node.
[Figure 8-3: a scene graph rooted in a Locale of the Virtual Universe, with BranchGroup (BG) and TransformGroup (T) nodes above both a Shape3D node and data (S) and a ViewPlatform object (VP) with its View; a user-code Behavior node (B) modifies the TransformGroup above the ViewPlatform.]
Methods
These methods set and retrieve the ViewPlatform’s view attach policy. A
ViewPlatform’s view attach policy determines how Java 3D places the virtual
eyepoint within the ViewPlatform. The policy can have one of the following val-
ues:
• NOMINAL_HEAD: Ensures that the end-user’s nominal eye position in
the physical world corresponds to the virtual eye’s nominal eye position in
the virtual world (the ViewPlatform’s origin). In essence, this policy tells
Java 3D to position the virtual eyepoint relative to the ViewPlatform origin
in the same way as the physical eyepoint is positioned relative to its nom-
inal physical-world origin. Deviations in the physical eye’s position and
orientation from nominal in the physical world generate corresponding de-
viations of the virtual eye’s position and orientation in the virtual world.
• NOMINAL_FEET: Ensures that the end-user’s virtual feet always touch
the virtual ground. This policy tells Java 3D to compute the physi-
cal-to-virtual-world correspondence in a way that enforces this constraint.
Java 3D does so by appropriately offsetting the physical eye’s position by
the end-user’s physical height. Java 3D uses the nominalEyeHeightFrom-
Ground parameter found in the PhysicalBody object (see Section 8.10,
“The PhysicalBody Object”) to perform this computation.
• NOMINAL_SCREEN: Allows an application to always have the virtual
eyepoint appear at some “viewable” distance from a point of interest. This
policy tells Java 3D to compute the physical-to-virtual-world correspon-
dence in a way that ensures that the renderer moves the nominal virtual
eyepoint away from the point of interest by the amount specified by the
nominalEyeOffsetFromNominalScreen parameter found in the Physical-
Body object (see Section 8.10, “The PhysicalBody Object”).
• NOMINAL_SCREEN_SCALED: This value is deprecated. All view at-
tach policies are now affected by the screen scale so this policy is identical
to NOMINAL_SCREEN, which should be used instead.
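The difference between the NOMINAL_HEAD and NOMINAL_FEET policies can be sketched as a simple vertical offset. This is an illustration only: Java 3D’s actual computation involves the full physical-to-virtual-world correspondence described in Appendix C, and the height value comes from the PhysicalBody’s nominalEyeHeightFromGround parameter.

```java
// Sketch (not the Java 3D implementation) of how the view attach policy
// affects the virtual eye's height relative to the ViewPlatform origin.
public class ViewAttachSketch {
    static final int NOMINAL_HEAD = 0, NOMINAL_FEET = 1;

    // Returns the virtual eye's Y offset above the ViewPlatform origin.
    // nominalEyeHeightFromGround would come from the PhysicalBody object.
    static double eyeHeightAboveOrigin(int policy, double nominalEyeHeightFromGround) {
        switch (policy) {
            case NOMINAL_HEAD: return 0.0;  // eye sits at the ViewPlatform origin
            case NOMINAL_FEET: return nominalEyeHeightFromGround; // feet touch origin
            default: throw new IllegalArgumentException("unknown policy");
        }
    }

    public static void main(String[] args) {
        System.out.println(eyeHeightAboveOrigin(NOMINAL_HEAD, 1.68)); // 0.0
        System.out.println(eyeHeightAboveOrigin(NOMINAL_FEET, 1.68)); // 1.68
    }
}
```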
ViewPlatform attached to the current View object. The eye and projection matri-
ces are constructed from the View object and its associated component objects.
In our scene graph, what we would normally consider the model transformation
would consist of the following three transformations: LT1T2. By multiplying
LT1T2 by a vertex in the shape object, we would transform that vertex into the
virtual universe’s coordinate system. What we would normally consider the view
platform transformation would be (L Tv1)^-1, or Tv1^-1 L^-1. This presents a problem
since coordinates in the virtual universe are 256-bit fixed-point values, which
cannot be used to efficiently represent transformed points.
Fortunately, however, there is a solution to this problem. Composing the model
and view platform transformations gives us
Tv1^-1 L^-1 L T1 T2 = Tv1^-1 I T1 T2 = Tv1^-1 T1 T2,
the matrix that takes vertices in an object’s local coordinate system and places
them in the ViewPlatform’s coordinate system. Note that the high-resolution
Locale transformations cancel each other out, which removes the need to actually
transform points into high-resolution VirtualUniverse coordinates. The general
formula of the matrix that transforms object coordinates to ViewPlatform coordi-
nates is Tvn^-1 … Tv2^-1 Tv1^-1 T1 T2 … Tm.
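The cancellation of the Locale transformations can be checked numerically with plain 4×4 matrices. This sketch is independent of Java 3D; simple translations are used so the inverses are trivial to write down.

```java
// Numerical check that the high-resolution Locale transform cancels:
// Tv1^-1 L^-1 L T1 T2 equals Tv1^-1 T1 T2. Plain row-major 4x4 matrices.
public class LocaleCancellation {
    static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // A translation matrix; its inverse is the opposite translation.
    static double[][] translate(double x, double y, double z) {
        return new double[][] {
            {1, 0, 0, x}, {0, 1, 0, y}, {0, 0, 1, z}, {0, 0, 0, 1}
        };
    }

    public static void main(String[] args) {
        double[][] L     = translate(1e6, 0, 0);   // huge Locale offset
        double[][] Linv  = translate(-1e6, 0, 0);  // L^-1
        double[][] T1    = translate(0, 2, 0);
        double[][] T2    = translate(3, 0, 0);
        double[][] Tv1inv = translate(0, 0, -5);   // (view platform transform)^-1

        double[][] withLocale = mul(mul(mul(mul(Tv1inv, Linv), L), T1), T2);
        double[][] without    = mul(mul(Tv1inv, T1), T2);
        boolean same = true;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                same &= withLocale[i][j] == without[i][j];
        System.out.println(same); // true: the Locale transforms cancel
    }
}
```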
As was mentioned above, the View object contains the remainder of the view
information, specifically, the eye matrix, E, that takes points in the
ViewPlatform’s local coordinate system and translates them into the user’s eye
coordinate system, and the projection matrix, P, that projects objects in the eye’s
coordinate system into clipping coordinates. The final concatenation of matrices
for rendering our shape object “S” on the specified Canvas3D is P E Tv1^-1 T1 T2. In
general this is P E Tvn^-1 … Tv2^-1 Tv1^-1 T1 T2 … Tm.
The details of how Java 3D constructs the matrices E and P in different end-user
configurations are described in Appendix C, “View Model Details.”
[Figure: the example scene graph in a Hi-Res Locale of the Virtual Universe, showing the Locale transform L, the model transform T1 above the shape, and the view platform transform Tv1 above the ViewPlatform with its Physical Body and Physical Environment objects.]
Constructors
The View object specifies the following constructor.
public View()
Methods
The View object specifies the following methods.
These methods set and retrieve the View’s PhysicalBody object. See
Section 8.10, “The PhysicalBody Object,” for more information on the Physical-
Body object.
These methods set and retrieve the View’s PhysicalEnvironment object. See
Section 8.11, “The PhysicalEnvironment Object,” for more information on the
PhysicalEnvironment object.
This method attaches a ViewPlatform leaf node to this View, replacing the exist-
ing ViewPlatform. If the ViewPlatform is part of a live scene graph, or is subse-
quently made live, the scene graph is rendered into all canvases in this View
object’s list of Canvas3D objects. To remove a ViewPlatform without attaching a
new one—causing the View to no longer be rendered—a null reference may be
passed to this method. In this case, the behavior is as if rendering were simulta-
neously stopped on all canvases attached to the View—the last frame that was
rendered in each remains visible until the View is again attached to a live
ViewPlatform object. See Section 5.10, “ViewPlatform Node,” for more informa-
tion on ViewPlatform objects.
These methods set, retrieve, add to, insert after, and remove a Canvas3D object
from this View. The index specifies the reference to the Canvas3D object within
the View object. See Section 8.9, “The Canvas3D Object” for more information
on Canvas3D objects.
These methods allow the introduction of new input devices into a Java 3D envi-
ronment and the retrieval of all the input devices available within a Java 3D envi-
ronment. See Section 10.1, “InputDevice Interface” for more information on
input devices.
These methods allow the introduction of new audio devices into a Java 3D envi-
ronment and the retrieval of all the audio devices available within a Java 3D
environment. See Section 11.1, “AudioDevice Interface,” for more information
on audio devices.
Methods
These two methods set and retrieve the current projection policy for this view.
The projection policies are as follows:
These methods set and retrieve the local eye lighting flag, which indicates
whether the local eyepoint is used in lighting calculations for perspective projec-
tions. If this flag is set to true, the view vector is calculated per vertex based on
the direction from the actual eyepoint to the vertex. If this flag is set to false, a
single view vector is computed from the eyepoint to the center of the view frus-
tum. This is called infinite eye lighting. Local eye lighting is disabled by default,
and is ignored for parallel projections.
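The distinction can be illustrated with plain vector arithmetic. This is a sketch, not renderer code; the class and method names here are hypothetical.

```java
// Sketch of the difference between local and infinite eye lighting: with
// local eye lighting the view vector is recomputed per vertex, otherwise a
// single vector from the eye to the center of the view frustum is reused.
public class EyeLightingSketch {
    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[] { v[0]/len, v[1]/len, v[2]/len };
    }

    static double[] viewVector(boolean localEyeLighting,
                               double[] eye, double[] vertex, double[] frustumCenter) {
        double[] target = localEyeLighting ? vertex : frustumCenter;
        return normalize(new double[] {
            target[0] - eye[0], target[1] - eye[1], target[2] - eye[2]
        });
    }

    public static void main(String[] args) {
        double[] eye = {0, 0, 0}, center = {0, 0, -10};
        // Infinite eye lighting: two different vertices share one view vector.
        double[] a = viewVector(false, eye, new double[]{ 5, 0, -10}, center);
        double[] b = viewVector(false, eye, new double[]{-5, 0, -10}, center);
        System.out.println(a[0] == b[0] && a[1] == b[1] && a[2] == b[2]); // true
        // Local eye lighting: the vector depends on the vertex position.
        double[] c = viewVector(true, eye, new double[]{ 5, 0, -10}, center);
        System.out.println(c[0] > 0); // true
    }
}
```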
Constants
This variable specifies the policy for resizing and moving windows. This policy
is used in specifying windowResizePolicy and windowMovementPolicy. This
variable specifies that the specified action takes place only in the physical world.
This variable specifies that Java 3D applies the associated policy in the virtual
world.
Methods
This variable specifies how Java 3D modifies the view when a user resizes a win-
dow. A value of PHYSICAL_WORLD states that Java 3D will treat window resizing
operations as only happening in the physical world. This implies that rendered
objects continue to fill the same percentage of the newly sized window, using
more or less pixels to draw those objects, depending on whether the window
grew or shrank in size. A value of VIRTUAL_WORLD states that Java 3D will treat
window resizing operations as also happening in the virtual world whenever a
resizing occurs in the physical world. This implies that rendered objects remain
the same size (use the same number of pixels), but since the window becomes
larger or smaller, the user sees more or less of the virtual world. The default
value is PHYSICAL_WORLD.
This variable specifies what part of the virtual world Java 3D will draw as a func-
tion of the window location on the display screen. A value of PHYSICAL_WORLD
states that the window acts as if it moves only on the physical screen. As the user
moves the window on the screen, the window’s position on the screen changes
but Java 3D continues to draw exactly the same image within that window. A
value of VIRTUAL_WORLD states that the window acts as if it also moves within the
virtual world. As the user moves the window on the physical screen, the win-
dow’s position on the screen changes and the image that Java 3D draws changes
as well to match what would be visible in the virtual world from a window in
that new position. The default value is PHYSICAL_WORLD.
Methods
The front clip policy determines where Java 3D places the front clipping plane.
The value is one of the following: PHYSICAL_EYE, PHYSICAL_SCREEN, VIRTUAL_
EYE, or VIRTUAL_SCREEN. The default value is PHYSICAL_EYE.
The back clip policy determines where Java 3D places the back clipping plane.
The value is one of the following: PHYSICAL_EYE, PHYSICAL_SCREEN, VIRTUAL_
EYE, or VIRTUAL_SCREEN. The default value is PHYSICAL_EYE.
In the default non-head-tracked mode, this value specifies the view model’s hori-
zontal field of view in radians. This value is ignored when the view model is
operating in head-tracked mode, or when the Canvas3D’s window eyepoint pol-
icy is set to a value other than the default setting of RELATIVE_TO_FIELD_OF_
VIEW (see Section C.5.3, “Window Eyepoint Policy”).
This value specifies the distance away from the clip origin, specified by the front
clip policy variable, in the direction of gaze where objects stop disappearing.
Objects closer than the clip origin (eye or screen) plus the front clip distance are
not drawn. Measurements are done in the space (physical or virtual) that is spec-
ified by the associated front clip policy parameter.
This value specifies the distance away from the clip origin (specified by the back
clip policy variable) in the direction of gaze where objects begin disappearing.
Objects farther away from the clip origin (eye or screen) plus the back clip dis-
tance are not drawn. Measurements are done in the space (physical or virtual)
that is specified by the associated back clip policy parameter. The View object’s
back clip distance is ignored if the scene graph contains an active Clip leaf node
(see Section 5.5, “Clip Node”).
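The clipping rule described above amounts to a simple interval test along the direction of gaze. This is an illustration only; in practice the distances are measured in whichever space (physical or virtual) the corresponding clip policy selects.

```java
// Illustration of the front/back clip rule: an object at distance d along
// the gaze direction is drawn only when it lies between (clip origin +
// front clip distance) and (clip origin + back clip distance).
public class ClipSketch {
    static boolean drawn(double d, double origin, double frontDist, double backDist) {
        return d >= origin + frontDist && d <= origin + backDist;
    }

    public static void main(String[] args) {
        System.out.println(drawn(5.0,   0.0, 0.1, 100.0)); // true
        System.out.println(drawn(0.05,  0.0, 0.1, 100.0)); // false: inside front plane
        System.out.println(drawn(500.0, 0.0, 0.1, 100.0)); // false: beyond back plane
    }
}
```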
This method returns the time at which the most recent rendering frame started. It
is defined as the number of milliseconds since January 1, 1970 00:00:00 GMT.
Since multiple canvases might be attached to this View, the start of a frame is
defined as the point just prior to clearing any canvas attached to this View.
This method returns the duration, in milliseconds, of the most recently completed
rendering frame. The time taken to render all canvases attached to this View is
measured. This duration is computed as the difference between the start of the
most recently completed frame and the end of that frame. Since multiple can-
vases might be attached to this View, the start of a frame is defined as the point
just prior to clearing any canvas attached to this View, while the end of a frame
is defined as the point just after swapping the buffer for all canvases.
This method returns the frame number for this view. The frame number starts at
0 and is incremented prior to clearing all the canvases attached to this view.
This method copies the last k frame start time values into the user-specified array.
The most recent frame start time is copied to location 0 of the array, the next
most-recent frame start time is copied into location 1 of the array, and so on. If
times.length is smaller than maxFrameStartTimes, only the last times.length
values are copied.
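Given such an array of frame start times (most recent first, in milliseconds), an application could estimate its frame rate as follows. The helper below is a sketch over a plain array and is not part of the Java 3D API.

```java
// Sketch of deriving an average frame rate from an array of frame start
// times ordered most-recent-first, as described for getFrameStartTimes.
public class FrameRateSketch {
    static double averageFps(long[] frameStartTimes) {
        int n = frameStartTimes.length;
        // times[0] is the newest frame start, times[n-1] the oldest.
        long elapsedMs = frameStartTimes[0] - frameStartTimes[n - 1];
        return (n - 1) * 1000.0 / elapsedMs; // frames per second over the window
    }

    public static void main(String[] args) {
        long[] times = { 1000, 980, 960, 940, 920 }; // one frame every 20 ms
        System.out.println(averageFps(times));        // 50.0
    }
}
```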
The first method stops the behavior scheduler after all currently-scheduled
behaviors are executed. Any frame-based behaviors scheduled to wake up on the
next frame will be executed at least once before the behavior scheduler is
stopped. The method returns a pair of integers that specify the beginning and end-
ing time (in milliseconds since January 1, 1970 00:00:00 GMT) of the behavior
scheduler’s last pass. The second method starts the behavior scheduler running
after it has been stopped. The third method retrieves a flag that indicates whether
the behavior scheduler is currently running.
The first method stops traversing this view after the current state of the scene
graph is reflected on all canvases attached to this view. The renderers associated
with these canvases are also stopped. The second method starts traversing this
view and starts the renderers associated with all canvases attached to this view.
The third method returns a flag indicating whether the traverser is currently run-
ning on this view.
Note: The above six methods are heavy-weight methods intended for verification
and image capture (recording). They are not intended to be used for flow control.
These methods set and retrieve the scene antialiasing flag. Scene antialiasing is
either enabled or disabled for this view. If enabled, the entire scene will be
antialiased on each canvas in which scene antialiasing is available.
Version 1.1 Alpha 01, February 27, 1998
8.7.7 Depth Buffer
The set method enables or disables automatic freezing of the depth buffer for
objects rendered during the transparent rendering pass (that is, objects rendered
using alpha blending) for this view. If enabled, depth buffer writes are disabled
during the transparent rendering pass regardless of the value of the
depth-buffer-write-enable flag in the RenderingAttributes object for a particular
node. This flag is enabled by default. The get method retrieves this flag.
Methods
These methods provide applications with information concerning the underlying
display hardware, such as the screen’s width and height in pixels or in meters.
This method retrieves the screen’s (image plate’s) width and height in pixels.
These methods retrieve the screen’s (image plate’s) physical width and height in
meters.
Constructors
The Canvas3D object specifies one constructor.
These methods, inherited from the parent Canvas class, retrieve the Canvas3D’s
screen position and size in pixels.
These methods set or retrieve the flag indicating whether this Canvas3D has ste-
reo enabled. If enabled, Java 3D generates left and right eye images. If the
Canvas3D’s StereoAvailable flag is false, Java 3D displays only the left eye’s
view even if an application sets StereoEnable to true. This parameter allows
applications to enable or disable stereo on a canvas-by-canvas basis.
This method specifies whether the underlying hardware supports double buffer-
ing on this canvas.
These methods set or retrieve the flag indicating whether this Canvas3D has dou-
ble buffering enabled. If disabled, all drawing is to the front buffer and no buffer
swap will be done between frames. It should be stressed that running Java 3D
with double buffering disabled is not recommended.
ify their own head’s characteristics, such as the location of the eyes and the
interpupillary distance. See Section C.8, “The PhysicalBody Object,” for details. The
default values are sufficient for applications that are running in a
non-head-tracked environment and that do not manually set the eyepoint.
Constructors
public PhysicalBody()
Constructors
public PhysicalEnvironment()
Behavior nodes provide the means for animating objects, processing key-
board and mouse inputs, reacting to movement, and enabling and processing pick
events. Behavior nodes contain Java code and state variables. A Behavior node’s
Java code can interact with Java objects, change node values within a Java 3D
scene graph, change the behavior’s internal state—in general, perform any com-
putation it wishes.
Simple behaviors can add surprisingly interesting effects to a scene graph. For
example, one can animate a rigid object by using a Behavior node to repetitively
modify the TransformGroup node that points to the object one wishes to animate.
Alternatively, a Behavior node can track the current position of a mouse and
modify portions of the scene graph in response.
The scheduling region defines a spatial volume that serves to enable the sched-
uling of Behavior nodes. A Behavior node is active (can receive stimuli) when-
ever a ViewPlatform’s activation volume intersects a Behavior object’s
scheduling region. Only active behaviors can receive stimuli.
The initialize method allows a Behavior object to initialize its internal state
and specify its initial wakeup condition(s). Java 3D invokes a behavior’s initial-
ize code when the behavior’s containing BranchGroup node is added to the vir-
tual universe. Java 3D does not invoke the initialize method in a new thread.
Thus, for Java 3D to regain control, the initialize method must not execute an
infinite loop: It must return. Furthermore, a wakeup condition must be set or else
the behavior’s processStimulus method is never executed.
The processStimulus method receives and processes a behavior’s ongoing mes-
sages. The Java 3D behavior scheduler invokes a Behavior node’s processStim-
ulus method when a ViewPlatform’s activation volume intersects a Behavior
object’s scheduling region and all of that behavior’s wakeup criteria are satisfied.
The processStimulus method performs its computations and actions (possibly
including the registration of state change information that could cause Java 3D to
wake other Behavior objects), establishes its next wakeup condition, and finally
exits.
9.3 Scheduling
As a virtual universe grows large, Java 3D must carefully husband its resources
to ensure adequate performance. In a 10,000-object virtual universe with 400 or
so Behavior nodes, a naive implementation of Java 3D could easily end up con-
suming the majority of its compute cycles in executing the behaviors associated
with the 400 Behavior objects before it draws a frame. In such a situation, the
frame rate could easily drop to unacceptable levels.
Behavior objects are usually associated with geometric objects in the virtual uni-
verse. In our example of 400 Behavior objects scattered throughout a 10,000-
object virtual universe, only a few of these associated geometric objects would
be visible at a given time. A sizable fraction of the Behavior nodes—those asso-
ciated with nonvisible objects—need not be executed. Only those relatively few
Behavior objects that are associated with visible objects must be executed.
Java 3D mitigates the problem of a large number of Behavior nodes in a high-
population virtual universe through execution culling—choosing only to invoke
those behaviors that have high relevance.
Java 3D requires each behavior to have a scheduling region and to post a wakeup
condition. Together a behavior’s scheduling region and wakeup condition pro-
vide Java 3D’s behavior scheduler with sufficient domain knowledge to selec-
tively prune behavior invocations and only invoke those behaviors that absolutely
need to be executed.
Java 3D’s behavior scheduler executes those Behavior objects that have been
scheduled by calling the behavior’s processStimulus method.
Methods
The Behavior leaf node class defines the following methods.
This method, invoked by Java 3D’s behavior scheduler, is used to initialize the
behavior’s state variables and to establish its WakeupConditions. Classes that
extend Behavior must provide their own initialize method.
This method processes stimuli destined for this behavior. The behavior scheduler
invokes this method if its WakeupCondition is satisfied. Classes that extend
Behavior must provide their own processStimulus method.
These two methods access or modify the Behavior node’s scheduling bounds.
This bounds is used as the scheduling region when the scheduling bounding leaf
is set to null. A behavior is scheduled for activation when its scheduling region
intersects the ViewPlatform’s activation volume (if its wakeup criteria have been
satisfied). The getSchedulingBounds method returns a copy of the associated
bounds.
These two methods access or modify the Behavior node’s scheduling bounding
leaf. When set to a value other than null, this bounding leaf overrides the sched-
uling bounds object and is used as the scheduling region.
This method defines this behavior’s wakeup criteria. This method may only be
called from a Behavior object’s initialize or processStimulus methods to
(re)arm the next wakeup. It should be the last thing done by those methods.
This method, when invoked by a behavior, informs the Java 3D scheduler of the
identified event. The scheduler will schedule other Behavior objects that have
registered interest in this posting.
This method copies all the node information from originalNode into the current
node. This method is called from the cloneTree method.
This is a callback method used to allow a node to check if any nodes referenced
by that node have been duplicated via a call to cloneTree. This method is called
by the cloneTree method after all nodes in the subgraph have been duplicated.
The cloned leaf node’s method will be called and the leaf node can then look up
any node references by using the getNewNodeReference method found in the
NodeReferenceTable object. If a match is found, a reference to the corresponding
node in the newly cloned subgraph is returned. If no corresponding reference is
found, either a DanglingReferenceException is thrown or a reference to the
original node is returned, depending on the value of the allowDanglingRefer-
ences parameter passed in the cloneTree call.
This method returns the primary view associated with this behavior. This method
is useful with certain types of behaviors, such as Billboard and LOD, that rely on
per-View information, and with behaviors in general with regard to scheduling (the
distance from the view platform determines the active behaviors). The “primary”
view is defined to be the first View attached to a live ViewPlatform, if there is
more than one active View. So, for instance, Billboard behaviors would be ori-
ented toward this primary view, in the case of multiple active views into the same
scene graph.
Methods
The Java 3D API provides two methods for constructing WakeupCondition enu-
merations.
These two methods create enumerators that sequentially access this WakeupCon-
dition’s wakeup criteria. The first method creates an enumerator that sequentially
presents all wakeup criteria that were used to construct this WakeupCondition.
The second method creates an enumerator that sequentially presents only those
wakeup criteria that have been satisfied.
Methods
9.5.3.1 WakeupOnAWTEvent
This WakeupCriterion object specifies that Java 3D should awaken a behavior
when the specified AWT event occurs.
Constructors
Methods
This method returns the array of consecutive AWT events that triggered this
WakeupCriterion to awaken the Behavior object. The Behavior object can
retrieve the AWTEvent array and process it in any way it wishes.
9.5.3.2 WakeupOnActivation
The WakeupOnActivation object specifies a wakeup the first time the
ViewPlatform’s activation region intersects with this object’s scheduling region.
This gives the behavior an explicit means of executing code when it is activated.
Constructors
public WakeupOnActivation()
9.5.3.3 WakeupOnBehaviorPost
This WakeupCriterion object specifies that Java 3D should awaken this behavior
when the specified behavior posts the specified ID.
Constructors
awaken on any post from the specified behavior. Specifying a null behavior
implies that this behavior should awaken whenever any behavior posts the speci-
fied postId.
Methods
This method returns the postId that caused the behavior to wake up. If the postId
used to construct this wakeup criterion was not zero, the triggering postId will
always be equal to the postId used in the constructor.
This method returns the behavior that triggered this wakeup. If the arming behav-
ior used to construct this object was not null, the triggering behavior will be the
same as the arming behavior.
9.5.3.4 WakeupOnDeactivation
The WakeupOnDeactivation object specifies a wakeup on the first detection of a
ViewPlatform’s activation region no longer intersecting with this object’s sched-
uling region. This gives the behavior an explicit means of executing code when it
is deactivated.
Constructors
public WakeupOnDeactivation()
9.5.3.5 WakeupOnElapsedFrames
This WakeupCriterion object specifies that Java 3D should awaken this behavior
after it has rendered the specified number of frames. A value of 0 implies that
Java 3D will awaken this behavior at the next frame.
Constructors
Methods
This method returns the frame count used in creating this WakeupCriterion.
9.5.3.6 WakeupOnElapsedTime
This WakeupCriterion object specifies that Java 3D should awaken this behavior
after an elapsed number of milliseconds.
Constructors
Note: The Java 3D scheduler will schedule the object after the specified number
of milliseconds have elapsed, not before. However, the elapsed time may actually
be slightly greater than the time specified.
Methods
9.5.3.7 WakeupOnSensorEntry
This WakeupCriterion object specifies that Java 3D should awaken this behavior
when any sensor enters the specified region.
Note: There can be situations in which a sensor may enter and then exit an
armed region so rapidly that neither the Entry nor Exit condition is engaged.
Constructors
Methods
This method returns the Bounds object used in creating this WakeupCriterion.
9.5.3.8 WakeupOnSensorExit
This WakeupCriterion object specifies that Java 3D should awaken this behavior
when any sensor, already marked as within the region, is no longer in that region.
Note: This semantic guarantees that an Exit condition is engaged if its corre-
sponding Entry condition was engaged.
Constructors
Methods
This method returns the Bounds object used in creating this WakeupCriterion.
9.5.3.9 WakeupOnCollisionEntry
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnCollisionEntry behavior when the specified object collides with any other
object in the scene graph.
Constants
Constructors
Methods
These methods return the “collideable” path or bounds object used in specifying
the collision detection.
These methods return the path or bounds object that caused the collision.
9.5.3.10 WakeupOnCollisionExit
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnCollisionExit behavior when the specified object no longer collides with
any other object in the scene graph.
Constants
Constructors
Methods
These methods return the “collideable” path or bounds object used in specifying
the collision detection.
These methods return the path or bounds object that caused the collision.
9.5.3.11 WakeupOnCollisionMovement
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnCollisionMovement behavior when the specified object moves while in a
state of collision with any other object in the scene graph.
Constants
Constructors
Methods
These methods return the “collideable” path or bounds object used in specifying
the collision detection.
These methods return the path or bounds object that caused the collision.
9.5.3.12 WakeupOnViewPlatformEntry
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnViewPlatformEntry behavior when any ViewPlatform enters the specified
region.
Note: There can be situations in which a ViewPlatform may enter and then exit
an armed region so rapidly that neither the Entry nor Exit condition is engaged.
Constructors
Methods
This method returns the Bounds object used in creating this WakeupCriterion.
9.5.3.13 WakeupOnViewPlatformExit
This WakeupCriterion object specifies that Java 3D should awaken the Wake-
upOnViewPlatformExit behavior when any ViewPlatform, already marked as
within the region, is no longer in that region.
Note: This semantic guarantees that an Exit condition gets engaged if its corre-
sponding Entry condition was engaged.
Constructors
Methods
This method returns the Bounds object used in creating this WakeupCriterion.
9.5.3.14 WakeupOnTransformChange
The WakeupOnTransformChange object specifies a wakeup when the transform
within a specified TransformGroup changes.
Constructors
Methods
This method returns the TransformGroup node used in creating this WakeupCri-
terion.
9.5.3.15 WakeupAnd
The WakeupAnd class specifies any number of wakeup conditions ANDed
together. This WakeupCondition object specifies that Java 3D should awaken this
Behavior when all of the WakeupCondition’s constituent wakeup criteria become
valid.
Constructors
This constructor creates a WakeupAnd object that informs the Java 3D sched-
uler to wake up this Behavior object when all the conditions specified in the
array of WakeupCriterion objects have become valid.
9.5.3.16 WakeupOr
The WakeupOr class specifies any number of wakeup conditions ORed together.
This WakeupCondition object specifies that Java 3D should awaken this Behav-
ior when any of the WakeupCondition’s constituent wakeup criteria becomes
valid.
Constructors
This constructor creates a WakeupOr object that informs the Java 3D scheduler
to wake up this Behavior object when any condition specified in the array of
WakeupCriterion objects becomes valid.
9.5.3.17 WakeupAndOfOrs
The WakeupAndOfOrs class specifies any number of OR wakeup conditions
ANDed together. This WakeupCondition object specifies that Java 3D should
awaken this Behavior when all of the WakeupCondition’s constituent WakeupOr
conditions become valid.
Constructors
9.5.3.18 WakeupOrOfAnds
The WakeupOrOfAnds class specifies any number of AND wakeup conditions
ORed together. This WakeupCondition object specifies that Java 3D should
awaken this Behavior when any of the WakeupCondition’s constituent Wakeu-
pAnd conditions becomes valid.
Constructors
[Figure: an interpolator's generic alpha waveform, showing the trigger time, the phase delay, and the α-increasing, α-at-1, α-decreasing, and α-at-0 regions]
On the left-hand side, the trigger time defines, in milliseconds, when this
interpolator's waveform begins. The region directly to the right of the trigger
time, labeled Phase Delay, defines a time period during which the waveform does
not change.
During phase delays α is either 0 or 1, depending on which region it precedes.
Phase delays provide an important means for offsetting multiple interpolators
from one another, especially when the interpolators all have the same parame-
ters. The next four regions, labeled α increasing, α at 1, α decreasing, and α at
0, specify the durations for the corresponding values of alpha.
Interpolators have a loop count that determines how many times to repeat the
sequence of α increasing, α at 1, α decreasing, and α at 0; they also have associ-
ated mode flags that enable either the increasing or decreasing portions, or both,
of the waveform.
Developers can use the loop count in conjunction with the mode flags to generate
various kinds of actions. By specifying a loop count of 1 and enabling the mode
flag for only the α-increasing and α-at-1 portion of the waveform, we would get
the waveform shown in Figure 9-2.
Figure 9-2 An Interpolator Set to a Loop Count of 1 with Mode Flags Set to Enable Only
the α-Increasing and α-at-1 Portion of the Waveform
In Figure 9-2, the alpha value is 0 before the combination of trigger time plus the
phase delay duration. The alpha value changes from 0 to 1 over a specified inter-
val of time, and thereafter the alpha value remains 1 (subject to the reprogram-
ming of the interpolator’s parameters). A possible use of a single α-increasing
value might be to combine it with a rotation interpolator to program a door open-
ing.
Similarly, by specifying a loop count of 1 and a mode flag that enables only the
α-decreasing and α-at-0 portion of the waveform, we would get the waveform
shown in Figure 9-3.
Figure 9-3 An Interpolator Set to a Loop Count of 1 with Mode Flags Set to Enable Only
the α-Decreasing and α-at-0 Portion of the Waveform
In Figure 9-3, the alpha value is 1 before the combination of trigger time plus the
phase delay duration. The alpha value changes from 1 to 0 over a specified inter-
val, and thereafter the alpha value remains 0 (subject to the reprogramming of
the interpolator’s parameters). A possible use of a single α-decreasing value
might be to combine it with a rotation interpolator to program a door closing.
We can combine both of the above waveforms by specifying a loop count of 1
and setting the mode flag to enable both the α-increasing and α-at-1 portion of
the waveform as well as the α-decreasing and α-at-0 portion of the waveform.
This combination would result in the waveform shown in Figure 9-4.
Figure 9-4 An Interpolator Set to a Loop Count of 1 with Mode Flags Set to Enable All Por-
tions of the Waveform
In Figure 9-4, the alpha value is 0 before the combination of trigger time plus the
phase delay duration. The alpha value changes from 0 to 1 over a specified
period of time, remains at 1 for another specified period of time, then changes
from 1 to 0 over a third specified period of time, and thereafter the alpha value
remains 0 (subject to the reprogramming of the interpolator’s parameters). A
possible use of an α-increasing followed by an α-decreasing value might be to
combine it with a rotation interpolator to program a door swinging open and then
closing.
By increasing the loop count, we can get repetitive behavior, such as a door
swinging open and closed some number of times. At the extreme, we can specify
a loop count of −1 (representing infinity).
Figure 9-5 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable Only the α-
Increasing and α-at-1 Portion of the Waveform
In Figure 9-5, alpha goes from 0 to 1 over a fixed duration of time, stays at 1 for
another fixed duration of time, and then repeats.
Similarly, Figure 9-6 shows a looping interpolator with mode flags set to enable
only the α-decreasing and α-at-0 portion of the waveform.
Figure 9-6 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable Only the α-
Decreasing and α-at-0 Portion of the Waveform
Finally, Figure 9-7 shows a looping interpolator with both the increasing and
decreasing portions of the waveform enabled.
Figure 9-7 An Interpolator Set to Loop Infinitely and Mode Flags Set to Enable All Por-
tions of the Waveform
In all three cases shown by Figure 9-5, Figure 9-6, and Figure 9-7, we can com-
pute the exact value of alpha at any point in time.
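As a sketch of that computation (illustrative only, not part of the Java 3D API), the alpha value of Figure 9-5, an infinitely looping waveform with only the increasing and at-1 regions enabled and with ramp durations assumed to be zero, can be computed as follows:

```java
// Illustrative sketch, not the Java 3D API: computing alpha for the
// waveform of Figure 9-5 (loop count -1, only the increasing and at-1
// regions enabled, ramp durations assumed to be zero).
public class AlphaSketch {
    public static double alpha(long t, long triggerTime, long phaseDelay,
                               long increasingDuration, long atOneDuration) {
        long start = triggerTime + phaseDelay;
        if (t < start) {
            return 0.0;                    // waveform has not begun yet
        }
        long period = increasingDuration + atOneDuration;
        long u = (t - start) % period;     // offset within the current loop
        if (u < increasingDuration) {
            return (double) u / increasingDuration;  // linearly increasing region
        }
        return 1.0;                        // at-1 region
    }
}
```

For example, with a 1000-ms increasing region and a 1000-ms at-1 region, `alpha(500, 0, 0, 1000, 1000)` yields 0.5 and `alpha(1500, 0, 0, 1000, 1000)` yields 1.0.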
[Figure: plots of α acceleration, α velocity, and α value over time for an alpha waveform]
Constants
These flags specify that this alpha’s mode is to use the increasing or decreasing
component of the alpha, respectively.
Constructors
public Alpha()
public Alpha(int loopCount, long increasingAlphaDuration)
public Alpha(int loopCount, long triggerTime,
long phaseDelayDuration, long increasingAlphaDuration,
long increasingAlphaRampDuration, long alphaAtOneDuration)
public Alpha(int loopCount, int mode, long triggerTime,
long phaseDelayDuration, long increasingAlphaDuration,
long increasingAlphaRampDuration,
long alphaAtOneDuration, long decreasingAlphaDuration,
long decreasingAlphaRampDuration,
long alphaAtZeroDuration)
The first form constructs a new Alpha object using default values. The remaining
forms construct a new Alpha object using the specified parameters to define the
alpha phases for the object. The default values for the parameters not specified
by the constructors are as follows:
loopCount: –1
mode: INCREASING_ENABLE
triggerTime: 0
phaseDelayDuration: 0
increasingAlphaDuration: 1000
increasingAlphaRampDuration: 0
alphaAtOneDuration: 0
decreasingAlphaDuration: 0
decreasingAlphaRampDuration: 0
alphaAtZeroDuration: 0
Methods
These methods return the alpha value (between 0.0 and 1.0 inclusive) based on
the time-to-alpha parameters established for this interpolator. The first method
returns the alpha for the current time. The second method returns the alpha for an
arbitrary given time. If the alpha mapping has not started, the starting alpha value
is returned. If the alpha mapping has completed, the ending alpha value is
returned.
These methods set and retrieve this alpha’s start time, the base for all relative
time specifications.
244 Java 3D API Specification
BEHAVIORS AND INTERPOLATORS The Alpha Class 9.6.3
These methods set and retrieve this alpha’s mode, which defines which of the
alpha regions are active. The mode is one of the following values: INCREASING_
ENABLE, DECREASING_ENABLE, or both (when both of these modes are ORed
together).
If the mode is INCREASING_ENABLE, the increasingAlphaDuration, increas-
ingAlphaRampDuration, and alphaAtOneDuration are active. If the mode is
DECREASING_ENABLE, the decreasingAlphaDuration, decreasingAlphaRamp-
Duration, and alphaAtZeroDuration are active. If the mode is both constants
ORed, all regions are active. Active regions are all preceded by the phase delay
region.
These methods set and retrieve this alpha’s phase delay duration.
This method returns true if this Alpha object is past its activity window, that is,
if it has finished all its looping activity. This method returns false if this Alpha
object is still active.
Constants
This is the default WakeupCondition for all interpolators. The wakeupOn method
of Behavior, which takes a WakeupCondition as the method parameter, will need
to be called at the end of the processStimulus method of any class that sub-
classes Interpolator. This is done with the following method call:
wakeupOn(defaultWakeupCriterion);
Constructors
The Interpolator behavior class has the following constructors.
public Interpolator()
public Interpolator(Alpha alpha)
The first form constructs and initializes a new Interpolator with default values.
The second form provides the common initialization code for all specializations
of Interpolator.
Methods
These methods set and retrieve this interpolator’s Alpha object. Setting it to null
causes the Interpolator to stop running.
These methods set and retrieve this Interpolator’s enabled state—the default is
enabled.
This is the generic predefined interpolator initialize method. It sets the inter-
polator start time to the current time and schedules the behavior to awaken at the
next frame.
Constructors
The PositionInterpolator object specifies the following constructors.
Constructs and initializes a new PositionInterpolator that varies the target Trans-
formGroup node’s translational component (startPosition and endPosition).
The axisOfTranslation parameter specifies the transform that defines the local
coordinate system in which this interpolator operates. The translation is done
along the X-axis of this local coordinate system.
Methods
The PositionInterpolator object specifies the following methods.
These two methods set and get the Interpolator’s start position.
These two methods set and get the Interpolator’s end position.
These two methods set and get the Interpolator’s target TransformGroup node.
These two methods set and get the Interpolator’s axis of translation.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a translation value, computes a
transform based on this value, and updates the specified TransformGroup node
with this new transform.
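The underlying mapping is a simple linear interpolation. A minimal sketch of how alpha maps to a translation between startPosition and endPosition (illustrative names, not the API itself):

```java
// Illustrative sketch, not the Java 3D API: mapping an alpha value in
// [0, 1] to a translation between a start and an end position, as a
// PositionInterpolator does conceptually along its local X-axis.
public class PositionMap {
    public static double translation(double alpha, double startPosition,
                                     double endPosition) {
        // alpha = 0 yields startPosition, alpha = 1 yields endPosition
        return startPosition + alpha * (endPosition - startPosition);
    }
}
```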
The interpolated angle is used to generate a rotation transform about the local
Y-axis of this interpolator.
Constructors
Methods
These two methods set and get the interpolator’s minimum rotation angle, in
radians.
These two methods set and get the interpolator’s maximum rotation angle, in
radians.
These two methods set and get the interpolator’s axis of rotation.
These two methods set and get the interpolator’s target TransformGroup node.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a rotation angle, computes a trans-
form based on this angle, and updates the specified TransformGroup node with
this new transform.
Constructors
Constructs a new ColorInterpolator object that varies the target material between
two color values (startColor and endColor).
Methods
These two methods set and get the interpolator’s start color.
These two methods set and get the interpolator’s end color.
These two methods set and get the interpolator’s target Material component
object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a color value and updates the
specified Material object with this new color value.
Constructors
Constructs a trivial scale interpolator that varies its target TransformGroup node
between the two scale values, using the specified alpha, an identity matrix, a
minimum scale of 0.1, and a maximum scale of 1.0.
Methods
These two methods set and get the interpolator’s minimum scale.
These two methods set and get the interpolator’s maximum scale.
These two methods set and get the interpolator’s axis of scale.
These two methods set and get the interpolator’s target TransformGroup node.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a scale value, computes a trans-
form based on this value, and updates the specified TransformGroup node with
this new transform.
This is a callback method used to allow a node to check if any nodes referenced
by that node have been duplicated via a call to cloneTree. This method is called
by the cloneTree method after all nodes in the subgraph have been duplicated.
The cloned leaf node’s method will be called and the leaf node can then look up
any node references by using the getNewNodeReference method found in the
NodeReferenceTable object. If a match is found, a reference to the corresponding
node in the newly cloned subgraph is returned. If no corresponding reference is
found, either a DanglingReferenceException is thrown or a reference to the
original node is returned, depending on the value of the allowDanglingRefer-
ences parameter passed in the cloneTree call.
Constructors
index of the first child in the Switch node to select, and lastChildIndex, the
index of the last child in the Switch node to select).
Methods
These two methods set and get the interpolator’s first child index.
These two methods set and get the interpolator’s last child index.
These two methods set and get the interpolator’s target Switch node.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a child index value and updates
the specified Switch node with this new child index value.
Constructors
Methods
These two methods set and get the interpolator’s minimum transparency.
These two methods set and get the interpolator’s maximum transparency.
These two methods set and get the interpolator’s target TransparencyAttributes
component object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a transparency value and updates
the specified TransparencyAttributes object with this new transparency value.
Constructors
Methods
This method retrieves the lengths of the interpolator’s knots and positions arrays.
These two methods set and get the interpolator’s indexed position.
These two methods set and get the interpolator’s indexed knot value.
These two methods set and get the interpolator’s axis of translation.
These two methods set and get the interpolator’s target TransformGroup object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a translation value, computes a
transform based on this value, and updates the specified TransformGroup node
with this new transform.
Constructors
Methods
This method retrieves the lengths of the interpolator’s knots, positions, and quats
arrays.
These two methods set and get the interpolator’s indexed quaternion value.
These two methods set and get the interpolator’s indexed position.
These two methods set and get the interpolator’s indexed knot value.
These two methods set and get the interpolator’s axis of rotation and translation.
These two methods set and get the interpolator’s target TransformGroup object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into translation and rotation values,
computes a transform based on these values, and updates the specified Trans-
formGroup node with this new transform.
Constructors
The knots parameter specifies an array of knot values that defines a spline. The quats parameter specifies
an array of quaternion values at the knots. The positions parameter specifies an
array of position values at the knots. The scale parameter specifies the scale
component value.
Methods
This method retrieves the lengths of the interpolator’s knots and positions arrays.
These two methods set and get the interpolator’s indexed scale value.
These two methods set and get the interpolator’s indexed quaternion value.
These two methods set and get the interpolator’s indexed position.
These two methods set and get the interpolator’s indexed knot value.
These two methods set and get the interpolator’s axis of rotation, translation, and
scale.
These two methods set and get the interpolator’s target TransformGroup object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into translation, rotation, and scale val-
ues, computes a transform based on these values, and updates the specified
TransformGroup node with this new transform.
Constructors
Methods
This method retrieves the lengths of the interpolator’s knots and positions arrays.
These two methods set and get the interpolator’s indexed quaternion value.
These two methods set and get the interpolator’s indexed knot value.
These two methods set and get the interpolator’s axis of rotation.
These two methods set and get the interpolator’s target TransformGroup object.
This method is invoked by the behavior scheduler every frame. It maps the alpha
value that corresponds to the current time into a rotation angle, computes a trans-
form based on this angle, and updates the specified TransformGroup node with
this new transform.
Constructors
public LOD()
Methods
The LOD node class defines the following methods.
The addSwitch method appends the specified Switch node to this LOD’s list of
switches. The setSwitch method replaces the specified Switch node with the
Switch node provided. The insertSwitch method inserts the specified Switch
node at the specified index. The removeSwitch method removes the Switch node
at the specified index. The getSwitch method returns the Switch node specified
by the index. The numSwitches method returns a count of this LOD’s switches.
Constructors
public DistanceLOD()
public DistanceLOD(float distances[])
Methods
The numDistances method returns a count of the number of LOD distance cutoff
parameters. The getDistance method returns a particular LOD cutoff distance.
The setDistance method sets a particular LOD cutoff distance.
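Conceptually, the distances array partitions the range of viewer distances into regions that select successive children. A hedged sketch of that selection (illustrative only, not the DistanceLOD implementation):

```java
// Illustrative sketch, not the Java 3D implementation: selecting a child
// index from an array of increasing LOD cutoff distances. n distances
// select among n + 1 levels of detail.
public class DistanceSelect {
    public static int childIndex(double distance, double[] cutoffs) {
        for (int i = 0; i < cutoffs.length; i++) {
            if (distance < cutoffs[i]) {
                return i;          // first region the distance falls inside
            }
        }
        return cutoffs.length;     // beyond all cutoffs: lowest detail
    }
}
```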
Constants
The Billboard class adds the following new constants.
Specifies that rotation should be about the specified point and that the children’s
Y-axis should match the ViewPlatform’s Y-axis.
Constructors
The Billboard class specifies the following constructors.
public Billboard()
The first constructor constructs a Billboard behavior node with default parame-
ters that operates on the specified target TransformGroup node. The default
alignment mode is ROTATE_ABOUT_AXIS, with the axis along the Y-axis. The next
two constructors construct a Billboard behavior node with the specified axis and
mode that operates on the specified TransformGroup node. The axis parameter
specifies the ray about which the billboard rotates. The point parameter specifies
the position about which the billboard rotates. The mode parameter is the align-
ment mode and is either ROTATE_ABOUT_AXIS or ROTATE_ABOUT_POINT.
Methods
The Billboard class defines the following methods.
These methods set or retrieve the target TransformGroup node for this Billboard
object.
The first two methods set the rotation point. The third method gets the rotation
point and sets the parameter to this value.
Java 3D provides access to keyboards and mice using the standard Java API
for keyboard and mouse support. Additionally, Java 3D provides access to a vari-
ety of continuous-input devices such as six-degrees-of-freedom (6DOF) trackers
and joysticks.
Continuous-input devices like 6DOF trackers and joysticks have well defined
continuous inputs. Trackers produce a position and orientation that Java 3D
stores internally as a transformation matrix. Joysticks produce two continuous
values in the range [–1.0, 1.0] that Java 3D stores internally as a transformation
matrix with an identity rotation (no rotation) and one of the joystick values as the
X translation and the other as the Y translation component.
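That packing can be sketched as follows (a plain 4×4 array stands in for the Java 3D Transform3D class; the names are illustrative):

```java
// Illustrative sketch: packing two joystick axis values (each in the
// range [-1.0, 1.0]) into a 4x4 matrix with an identity rotation, the
// first value as the X translation and the second as the Y translation.
public class JoystickMatrix {
    public static double[][] toMatrix(double x, double y) {
        return new double[][] {
            {1.0, 0.0, 0.0, x},      // identity rotation, X translation
            {0.0, 1.0, 0.0, y},      // Y translation
            {0.0, 0.0, 1.0, 0.0},
            {0.0, 0.0, 0.0, 1.0}
        };
    }
}
```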
Unfortunately, continuous-input devices do not have the same level of consis-
tency when it comes to their associated switches or buttons. Still, the number of
buttons or switches attached to a particular sensing element remains constant
across all sensing elements associated with a single device.
Sensors do double duty. They not only represent actual physical detectors but also serve as
abstract six-degrees-of-freedom transformations that a Java 3D application can
access. The Sensor class is described in more detail in Section 10.2.3, “The Sen-
sor Object.”
Constants
These flags specify whether the associated device works in polled mode or
streaming mode.
Methods
This method returns the number of Sensor objects associated with this device.
This method returns the specified Sensor associated with this device.
This method sets the device’s current position and orientation as the device’s
nominal position and orientation (that is, establishes its reference frame relative
to the “tracker base” reference frame). This method is most useful in defining a
nominal pose in immersive head-tracked situations.
This method first polls the device for data values and then processes the values
received from the device.
10.2 Sensors
The Java 3D API provides only an abstract concept of a device. Rather than
focusing on issues of devices and device models, it instead defines the concept of
a sensor. A sensor consists of a timestamped sequence of input values and the
state of the buttons or switches at the time that Java 3D sampled the value. A
sensor also contains a hotspot offset specified in that sensor’s local coordinate
system. If not specified, the hotspot is (0.0, 0.0, 0.0).
Since a typical hardware environment contains multiple sensing elements,
Java 3D maintains an array of sensors. Users can access a sensor directly from
their Java code or they can assign a sensor to one of Java 3D’s predefined 6DOF
entities such as UserHead.
Constants
The Sensor object specifies the following constants.
These flags define the Sensor’s predictor type. The first flag defines no predic-
tion. The second flag specifies to generate the value to correspond with the next
frame time.
These flags define the Sensor’s predictor policy. The first flag specifies to use no
prediction policy. The second flag specifies to assume that the sensor is predict-
ing head position or orientation. The third flag specifies to assume that the sensor
is predicting hand position or orientation.
Constructors
The Sensor object specifies the following constructors.
These methods construct a new Sensor object associated with the specified
device and consisting of either a default number of SensorReads or sensorRead-
Count number of SensorReads and a hot spot at (0.0, 0.0, 0.0) specified in the
sensor’s local coordinate system. The default for sensorButtonCount is zero.
These methods construct a new Sensor object associated with the specified
device and consisting of either sensorReadCount number of SensorReads or a
default number of SensorReads and an offset defining the sensor’s hot spot in the
sensor’s local coordinate system. The default for sensorButtonCount is zero.
Methods
These methods set and retrieve the number of SensorRead objects associated
with this sensor and the number of buttons associated with this sensor. Both the
Version 1.1 Alpha 01, February 27, 1998
number of SensorRead objects and the number of buttons are determined at Sen-
sor construction time.
These methods set and retrieve the sensor’s hotspot offset. The hotspot is speci-
fied in the sensor’s local coordinate system.
These methods extract the most recent sensor reading and the kth most recent
sensor reading from the Sensor object. In both cases, the methods copy the sen-
sor value into the specified argument.
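The "kth most recent" access pattern suggests a fixed-size circular buffer of reads. A hedged sketch of that bookkeeping (illustrative only; the real Sensor class manages this internally):

```java
// Illustrative sketch: a circular buffer holding the most recent sensor
// read times, supporting "most recent" (k = 0) and "kth most recent"
// access as described above. Not the Java 3D implementation.
public class ReadBuffer {
    private final long[] times;
    private int next = 0;    // index where the next read will be stored
    public ReadBuffer(int capacity) {
        times = new long[capacity];
    }
    public void add(long time) {
        times[next] = time;
        next = (next + 1) % times.length;
    }
    public long lastReadTime(int k) {
        // k = 0 is the most recent read; wrap around the buffer safely
        int n = times.length;
        return times[((next - 1 - k) % n + n) % n];
    }
}
```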
The first method computes the sensor reading consistent with the prediction pol-
icy and copies that value into the read matrix. The second method computes the
sensor reading consistent as of time deltaT in the future and copies that value
into the read matrix. All times are in milliseconds.
These methods return the time associated with the most recent sensor reading
and with the kth most recent sensor reading, respectively.
These methods return the state of the buttons associated with the most recent
sensor reading and the kth most recent sensor reading, respectively.
These methods set and retrieve the sensor’s predictor type. The predictor type is
either PREDICT_NONE or PREDICT_NEXT_FRAME_TIME.
These methods set and retrieve the sensor’s predictor policy. The predictor policy
is one of the following: NO_PREDICTOR, HEAD_PREDICTOR, or HAND_PREDICTOR.
This method returns the current number of SensorRead objects per sensor.
This method sets the next SensorRead object to the specified values, including
the next SensorRead’s associated time, transformation, and button state array.
Constants
Constructors
The SensorRead object specifies the following constructor.
public SensorRead()
Methods
These methods set and retrieve the SensorRead object’s transform. They allow a
device to store a new position and orientation value into the SensorRead object,
and a consumer of that value to access it.
These methods set and retrieve the SensorRead object’s timestamp. They allow a
device to store a new timestamp value into the SensorRead object, and a con-
sumer of that value to access it.
These methods set and retrieve the SensorRead object’s button values. They
allow a device to store an integer that encodes the button values into the Sensor-
Read object, and a consumer of those values to access the state of the buttons.
10.3 Picking
Behavior nodes provide the means for building developer-specific picking
semantics. An application developer can define custom picking semantics using
Java 3D’s behavior mechanism (see Chapter 9, “Behaviors and Interpolators”).
The developer might wish to define pick semantics that use a mouse to shoot a
ray into the virtual universe from the current viewpoint, find the first object along
that ray, and highlight that object when the end user releases the mouse button. A
typical scenario follows:
1. The application constructs a Behavior node that arms itself to awaken
when AWT detects a left-mouse-button-down event.
2. Upon awakening from a left-mouse-button-down event, the behavior
a. Updates a Switch node to draw a ray that emanates from the center of
the screen.
b. Changes that ray’s TransformGroup node so that the ray points in the
direction of the current mouse position.
c. Declares its interest in mouse-move or left-mouse-button-up events.
3. Upon awakening from a mouse-move event, the behavior
a. Changes that ray’s TransformGroup node so that the ray points in the
direction of the current mouse position.
b. Declares its interest in mouse-move or left-mouse-button-up events.
4. Upon awakening from a left-mouse-button-up event, the behavior
a. Changes that ray’s TransformGroup node so that the ray points in the
direction of the current mouse position.
b. Intersects the ray with all the objects in the virtual universe to find the
first object that the ray intersects.
c. Changes the appearance component of that object’s shape node to
highlight the selected object.
d. Declares its interest in left-mouse-button-down events.
Java 3D includes helper functions that aid in intersecting various geometric
objects with objects in the virtual universe by
• Intersecting an oriented ray with all the objects in the virtual universe. That
function can return the first object intersected along that ray, all the objects
that intersect that ray, or a list of all the objects along that ray sorted by dis-
tance from the ray’s origin.
• Intersecting a volume with all the objects in the virtual universe. That func-
tion returns a list of all the objects contained in that volume.
• Discovering which vertex within an object is closest to a specified ray.
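The last helper can be sketched in isolation. Plain arrays stand in for the Java 3D tuple classes, and the names are illustrative rather than the picking API itself:

```java
// Illustrative sketch, not the Java 3D picking API: finding the vertex
// closest to a ray given by an origin and a unit-length direction.
public class ClosestVertex {
    // Distance from point p to the ray (o, d), clamping behind the origin.
    public static double distanceToRay(double[] p, double[] o, double[] d) {
        double[] v = {p[0] - o[0], p[1] - o[1], p[2] - o[2]};
        double t = v[0] * d[0] + v[1] * d[1] + v[2] * d[2];
        if (t < 0.0) {
            t = 0.0;   // closest ray point is the origin itself
        }
        double qx = v[0] - t * d[0], qy = v[1] - t * d[1], qz = v[2] - t * d[2];
        return Math.sqrt(qx * qx + qy * qy + qz * qz);
    }
    public static int closestVertex(double[][] vertices, double[] o, double[] d) {
        int best = 0;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < vertices.length; i++) {
            double dist = distanceToRay(vertices[i], o, d);
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        return best;
    }
}
```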
Constructors
public SceneGraphPath()
public SceneGraphPath(Locale root, Node object)
public SceneGraphPath(Locale root, Node nodes[], Node object)
These construct and initialize a new SceneGraphPath object. The first form uses
default values. The second form specifies the path’s Locale object and the object
in question. The third form includes an array of nodes that fall in between the
Locale and the object in question, and which nodes have their ENABLE_PICK_
REPORTING capability bit set. The object parameter may be a Group, Shape3D, or
Morph node. If any other type of leaf node is specified, an IllegalArgument-
Exception is thrown.
Methods
These methods set the path’s values. The first method sets the path’s interior val-
ues. The second method sets the path’s Locale to the specified Locale. The third
method sets the path’s object to the specified object (a Group node, or a Shape3D
or Morph leaf node). The fourth method replaces the link node associated with
the specified index with the specified newLink. The last method replaces all of
the link nodes with the new list of link nodes.
The first method returns the path’s Locale. The second method returns the path’s
object.
The first method returns the number of intermediate nodes in this path. The sec-
ond method returns the node associated with the specified index.
This method returns a copy of the transform associated with this SceneGraph-
Path. The method returns null if there is no transform associated. If this
SceneGraphPath was returned by a Java 3D picking and collision method, the
local-coordinate-to-virtual-coordinate transform for this scene graph object at the
time of the pick or collision is recorded.
This method determines whether two SceneGraphPath objects represent the same
path in the scene graph. Either object might include a different subset of internal
nodes; only the internal link nodes, the Locale, and the Node itself are compared.
The paths are not validated for correctness or uniqueness.
public boolean equals(SceneGraphPath testPath)
This method returns true if all of the data members of path testPath are equal
to the corresponding data members in this SceneGraphPath.
This method returns a hash number based on the data values in this object. Two different SceneGraphPath objects with identical data values (that is, equals(SceneGraphPath) returns true) will return the same hash number. Two paths with different data members may return the same hash value, although this is not likely.
This method returns a string representation of this object. The string contains the
class names of all nodes in the SceneGraphPath.
Constructors
public PickPoint()
public PickPoint(Point3d location)
The first constructor creates a PickPoint initialized to (0,0,0). The second con-
structor creates a PickPoint at the specified location.
Methods
Constructors
public PickRay()
public PickRay(Point3d origin, Vector3d direction)
The first constructor creates a PickRay initialized with an origin and direction of
(0,0,0). The second constructor creates a PickRay cast from the specified origin
and direction.
Methods
These methods set and retrieve the ray to point from the specified origin in the
specified direction.
Constructors
public PickSegment()
public PickSegment(Point3d start, Point3d end)
The first constructor creates a PickSegment object with the start and end of the
segment initialized to (0,0,0). The second constructor creates a PickSegment
object from the specified start and end points.
Methods
These methods set and return the line segment from the start point to the end
point.
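As an illustrative sketch of how these pick shapes are typically used (this requires the Java 3D runtime and is not runnable standalone; the BranchGroup parameter and the helper name are assumptions, not part of the specification):

```java
import javax.media.j3d.*;
import javax.vecmath.*;

// Sketch: cast a PickRay into a live branch graph and inspect the results.
public class PickSketch {
    // "scene" is assumed to be a live BranchGroup whose pickable nodes have
    // their ENABLE_PICK_REPORTING capability bit set.
    static void pick(BranchGroup scene, Point3d origin, Vector3d direction) {
        PickRay ray = new PickRay(origin, direction);
        SceneGraphPath[] hits = scene.pickAll(ray);
        if (hits == null) return;                 // the ray intersected nothing
        for (int i = 0; i < hits.length; i++) {
            System.out.println(hits[i]);          // class names of nodes on the path
            Node picked = hits[i].getObject();    // terminal Group, Shape3D, or Morph
        }
    }
}
```

PickPoint and PickSegment objects are used the same way; only the pick geometry differs.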
ing, setting particular audio device elements, and querying generic characteristics for any audio device.
Constants
Specifies that audio playback will be through a single speaker some distance
away from the listener.
Specifies that audio playback will be through stereo speakers some distance
away from, and at some angle to, the listener.
11.1.1 Initialization
Each audio device driver must be initialized. The chosen device driver should be initialized before any Java 3D Sound methods are executed because the implementation of the Sound methods, in general, is potentially device-driver dependent.
Methods
Initialize the audio device. Exactly what occurs during initialization is imple-
mentation dependent. This method provides explicit control by the user over
when this initialization occurs.
public abstract boolean close()
Closes the audio device, releasing resources associated with this device.
• A monaural speaker.
• A pair of speakers, equally distant from the listener, both at some angle
from the head coordinate system Z axis. It’s assumed that the speakers are
at the same elevation and oriented symmetrically about the listener.
The type of playback chosen affects the sound image generated. Cross-talk can-
cellation is applied to the audio image if playback over stereo speakers is
selected.
Methods
The following methods affect the playback of sound processed by the Java 3D
sound renderer.
These methods set and retrieve the type of audio playback device (HEADPHONES,
MONO_SPEAKER, or STEREO_SPEAKERS) used to output the analog audio from ren-
dering Java 3D Sound nodes.
These methods set and retrieve the distance, in meters, between the center ear (the midpoint between the left and right ears) and one of the speakers in the listener's environment. For monaural speaker playback, a typical distance from the listener to a speaker in a workstation cabinet is 0.76 meters. For stereo speakers placed at the sides of the display, this might be 0.82 meters.
These methods set and retrieve the angle, in radians, between the vectors from
the center ear to each of the speaker transducers and the vectors from the center
ear parallel to the head coordinate’s Z axis. Speakers placed at the sides of the
computer display typically range between 0.175 and 0.350 radians (between 10
and 20 degrees).
Methods
This method retrieves the maximum number of channels available for Java 3D
sound rendering for all sound sources.
During rendering, when Sound nodes are playing, this method returns the num-
ber of channels still available to Java 3D for rendering additional Sound nodes.
This is a deprecated method. This method is now part of the Sound class.
Java 3D’s execution and rendering model assumes the existence of a VirtualUniverse object and an attached scene graph. This scene graph can be minimal and not noticeable from an application’s perspective when using immediate-mode rendering, but it must exist.
Java 3D’s execution model intertwines with its rendering modes and with behav-
iors and their scheduling. This chapter first describes the three rendering modes,
then describes how an application starts up a Java 3D environment, and finally, it
discusses how the various rendering modes work within this framework.
A pure immediate mode application must create a minimal set of Java 3D objects
before rendering. In addition to a Canvas3D object, the application must create a
View object, with its associated PhysicalBody and PhysicalEnvironment objects,
and the following scene graph elements: a VirtualUniverse object, a high-resolution Locale object, a BranchGroup node object, a TransformGroup node object with an associated transform, and, finally, a ViewPlatform leaf node object that defines the position and orientation within the virtual universe from which the view is generated (see Figure 13-1).
Figure 13-1: Minimal immediate-mode structure. Scene graph side: VirtualUniverse, Hi-Res Locale, BranchGroup (BG), TransformGroup (TG). View side: Screen3D, PhysicalBody, and PhysicalEnvironment objects.
Java 3D provides utility functions that create much of this structure on behalf of a pure immediate-mode application, making it less noticeable from the application’s perspective, but the structure must exist.
All rendering is done completely under user control. The user must clear the 3D canvas, render all geometry, and swap the buffers. Additionally, rendering the right and left eyes for stereo viewing becomes the sole responsibility of the application.
In pure immediate mode, the user must stop the Java 3D renderer, via the
Canvas3D object stopRenderer() method, prior to adding the Canvas3D object
to an active View object (that is, one that is attached to a live ViewPlatform
object).
clear canvas
call preRender() // user-supplied method
set view
render opaque scene graph objects
call renderField(FIELD_ALL) // user-supplied method
render transparent scene graph objects
call postRender() // user-supplied method
synchronize and swap buffers
call postSwap() // user-supplied method
In both cases, the entire loop, beginning with clearing the canvas and ending with swapping the buffers, defines a frame. The application is given the opportunity to render immediate-mode geometry at any of the clearly identified spots in the rendering loop. A user specifies his or her own rendering methods by extending the Canvas3D class and overriding the preRender, postRender, postSwap, and/or renderField methods.
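A minimal sketch of such a subclass follows (this requires the Java 3D runtime and a display, so it is illustrative rather than standalone-runnable; the class name and the placeholder geometry are assumptions):

```java
import java.awt.GraphicsConfiguration;
import javax.media.j3d.*;

// Sketch: mix immediate-mode drawing into the retained-mode rendering loop
// by overriding the Canvas3D callbacks described above.
public class MixedModeCanvas extends Canvas3D {
    public MixedModeCanvas(GraphicsConfiguration config) {
        super(config);
    }

    public void preRender() {
        // invoked after the canvas is cleared, before any rendering this frame
    }

    public void renderField(int fieldDesc) {
        // invoked between opaque and transparent objects; fieldDesc is
        // FIELD_LEFT, FIELD_RIGHT, or FIELD_ALL
        GraphicsContext3D gc = getGraphicsContext3D();
        // gc.draw(immediateModeGeometry);  // placeholder for application geometry
    }

    public void postSwap() {
        // invoked after the buffer swap, at the very end of the frame
    }
}
```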
Constants
These constants specify the field that the rendering loop for this Canvas3D is
rendering. The FIELD_LEFT and FIELD_RIGHT values indicate the left and right
fields of a field-sequential stereo rendering loop, respectively. The FIELD_ALL
value indicates a monoscopic or single-pass stereo rendering loop.
Methods
Applications that wish to perform operations in the rendering loop prior to any
actual rendering must override this method. The Java 3D rendering loop invokes
this method after clearing the canvas and before any rendering has been done for
this frame.
Applications that wish to perform operations in the rendering loop following any
actual rendering must override this method. The Java 3D rendering loop invokes
this method after completing all rendering to the canvas for this frame and before
the buffer swap.
Applications that wish to perform operations at the very end of the rendering
loop must override this method. The Java 3D rendering loop invokes this method
after completing all rendering to this canvas, and all other canvases associated
with the current view, for this frame following the buffer swap.
Applications that wish to perform operations during the rendering loop must override this function. The Java 3D rendering loop invokes this method, possibly twice, during the loop. It is called once for each field (once per frame on a monoscopic system, or once each for the right eye and left eye on a field-sequential stereo system). This method is called after all opaque objects are rendered and before any transparent objects are rendered (subject to restrictions imposed by OrderedGroup nodes). It is intended for use by applications that want to mix retained/compiled-retained mode rendering with some immediate-mode rendering. The fieldDesc parameter is the field description: FIELD_LEFT, FIELD_RIGHT, or FIELD_ALL. Applications that wish to work correctly in stereo mode should render the same image for both FIELD_LEFT and FIELD_RIGHT calls. If Java 3D calls the renderer with FIELD_ALL, the immediate-mode rendering need only be done once.
These methods start or stop the Java 3D renderer for this Canvas3D object. If the
Java 3D renderer is currently running when stopRenderer is called, the render-
ing will be synchronized before being stopped. No further rendering will be done
to this canvas by Java 3D until the renderer is started again. If the Java 3D ren-
derer is not currently running when startRenderer is called, any rendering to
other Canvas3D objects sharing the same View will be synchronized before this
Canvas3D’s renderer is (re)started.
This method synchronizes and swaps buffers on a double-buffered canvas for this
Canvas3D object. This method may only be called if the Java 3D renderer has
been stopped. In the normal case, the renderer automatically swaps the buffer. If
the application invokes this method and the canvas has a running Java 3D ren-
derer, a RestrictedAccessException exception is thrown.
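Putting these pieces together, a pure immediate-mode frame loop might look like the following sketch (requires the Java 3D runtime, so it is illustrative rather than standalone-runnable; the method and parameter names are assumptions):

```java
import javax.media.j3d.*;

// Sketch: pure immediate-mode rendering. The Java 3D renderer is stopped,
// after which the application clears, draws, and swaps each frame itself.
public class ImmediateLoop {
    static void renderFrames(Canvas3D canvas, Geometry geometry) {
        canvas.stopRenderer();                       // must precede manual control
        GraphicsContext3D gc = canvas.getGraphicsContext3D();
        while (!Thread.interrupted()) {
            gc.clear();                              // current Background color/image
            gc.draw(geometry);                       // render application geometry
            canvas.swap();                           // synchronize and swap buffers
        }
    }
}
```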
13.3.1 GraphicsContext3D
The GraphicsContext3D object is used for immediate-mode rendering into a 3D
canvas. It is created by, and associated with, a specific Canvas3D object. A
GraphicsContext3D class defines methods that manipulate 3D graphics state
attributes and draw 3D geometric primitives.
Constructors
There are no publicly accessible constructors of GraphicsContext3D. An application obtains a 3D graphics context object from the Canvas3D object into which it wishes to render by using the getGraphicsContext3D method. The Canvas3D object creates a new GraphicsContext3D the first time an application invokes getGraphicsContext3D. A new GraphicsContext3D initializes its state variables to the following defaults:
Background object: null
Fog object: null
Appearance object: null
List of Light objects: empty
High-Res coordinates: (0, 0, 0)
modelTransform: identity
AuralAttributes object: null
List of Sound objects: empty
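For example, the defaults above can be replaced as in this sketch (requires the Java 3D runtime; the class and method names are assumptions, and none of the objects may be part of a live scene graph):

```java
import javax.media.j3d.*;
import javax.vecmath.*;

// Sketch: obtain a canvas's graphics context and replace some default state.
public class ContextSetup {
    static void setup(Canvas3D canvas) {
        GraphicsContext3D gc = canvas.getGraphicsContext3D();

        gc.setBackground(new Background(new Color3f(0.2f, 0.2f, 0.4f))); // was null
        gc.setAppearance(new Appearance());                              // was null

        DirectionalLight light = new DirectionalLight();
        light.setEnable(true);
        gc.addLight(light);                          // list of lights starts empty
    }
}
```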
Methods
These methods access or modify the current Appearance component object used by this 3D graphics context. The graphics context stores a reference to the specified Appearance object. This means that the application may modify individual appearance attributes by using the appropriate methods on the Appearance object. If the Appearance object is null, default values are used for all appearance attributes.
These methods access or modify the current Background leaf node object used by this 3D graphics context. The graphics context stores a reference to the specified Background node. This means that the application may modify the background color or image by using the appropriate methods on the Background node object (see Section 5.4, “Background Node”). The Background node must not be part of a live scene graph, nor may it subsequently be made part of a live scene graph; an IllegalSharingException is thrown in such cases. If the Background object is null, the default background color of black (0,0,0) is used to clear the canvas prior to rendering a new frame. The Background node’s application region is ignored for immediate-mode rendering.
These methods access or modify the current Fog leaf node object used by this 3D
graphics context. The graphics context stores a reference to the specified Fog
node. This means that the application may modify the fog attributes using the
appropriate methods on the Fog node object (see Section 5.6, “Fog Node”). The
Fog node must not be part of a live scene graph, nor may it subsequently be
made part of a live scene graph—an IllegalSharingException is thrown in
such cases. If the Fog object is null, fog is disabled. Both the region of influence
and the hierarchical scope of the Fog node are ignored for immediate-mode ren-
dering.
These methods access or modify the list of lights used by this 3D graphics con-
text. The addLight method adds a new light to the end of the list of lights. The
insertLight method inserts a new light before the light at the specified index.
The setLight method replaces the light at the specified index with the light pro-
vided. The removeLight method removes the light at the specified index. The
numLights method returns a count of the number of lights in the list. The
getLight method returns the light at the specified index. The getAllLights
method retrieves the Enumeration object of all lights.
The graphics context stores a reference to each light object in the list of lights.
This means that the application may modify the light attributes for any of the
lights using the appropriate methods on that Light node object (see Section 5.7,
“Light Node”). None of the Light nodes in the list of lights may be part of a live
scene graph, nor may they subsequently be made part of a live scene graph—an
IllegalSharingException is thrown in such cases. Adding a null Light object
to the list will result in a NullPointerException. Both the region of influence
and the hierarchical scope of all lights in the list are ignored for immediate-mode
rendering.
These methods access or modify the current model transform. The multiplyModelTransform method multiplies the current model transform by the specified transform and stores the result back into the current model transform. The specified transformation must be affine. A BadTransformException is thrown (see Section D.1, “BadTransformException”) if an attempt is made to specify an illegal Transform3D.
This method reads an image from the frame buffer and copies it into the ImageComponent or DepthComponent objects referenced by the specified Raster object. All parameters of the Raster object and the component ImageComponent or DepthComponent objects must be set to the desired values prior to calling this method. These values determine the location, size, and format of the pixel data that is read.
This method clears the canvas to the color or image specified by the current
Background leaf node object.
The first draw method draws the specified Geometry component object using the current state in the graphics context. The second draw method draws the specified Shape3D leaf node object. This is a convenience method, identical to calling the setAppearance(Appearance) and draw(Geometry) methods with the Appearance and Geometry component objects of the specified Shape3D node as arguments.
These methods access or modify the list of sounds used by this 3D graphics con-
text. The addSound method appends the specified sound to this graphics con-
text’s list of sounds. The insertSound method inserts the specified sound at the
specified index location. The setSound method replaces the specified sound with
the sound provided. The removeSound method removes the sound at the speci-
fied index location. The numSounds method retrieves the current number of
sounds in this graphics context. The getSound method retrieves the index-
selected sound. The isSoundPlaying method retrieves the sound-playing flag.
The getAllSounds method retrieves the Enumeration object of all the sounds.
The graphics context stores a reference to each sound object in the list of sounds.
This means that the application may modify the sound attributes for any of the
sounds by using the appropriate methods on that Sound node object (see
Section 5.8, “Sound Node”). None of the Sound nodes in the list of sounds may
be part of a live scene graph, nor may they subsequently be made part of a live
scene graph—an IllegalSharingException is thrown in such cases. Adding a
null Sound object to the list results in a NullPointerException. If the list of
sounds is empty, sound rendering is disabled.
Adding or inserting a sound to the list of sounds implicitly starts the sound play-
ing. Once a sound is finished playing, it can be restarted by setting the sound’s
enable flag to true. The scheduling region of all sounds in the list is ignored for
immediate-mode rendering.
Variables
The component values of a Tuple2f are directly accessible through the public variables x and y. To access the x component of a Tuple2f called upperLeftCorner, a programmer would write upperLeftCorner.x. The programmer would access the y component similarly.
Tuple Objects
  Tuple2f: Point2f, TexCoord2f, Vector2f
  Tuple3b: Color3b
  Tuple3d: Point3d, Vector3d
  Tuple3f: Color3f, Point3f, TexCoord3f, Vector3f
  Tuple4b: Color4b
  Tuple4d: Point4d, Quat4d, Vector4d
  Tuple4f: Color4f, Point4f, Quat4f, Vector4f
  AxisAngle4d, AxisAngle4f, GVector
Matrix Objects
  Matrix3f, Matrix3d, Matrix4f, Matrix4d, GMatrix
public float x
public float y
Constructors
These four constructors each return a new Tuple2f. The first constructor gener-
ates a Tuple2f from two floating-point numbers x and y. The second constructor
generates a Tuple2f from the first two elements of array t. The third constructor
generates a Tuple2f from the tuple t1. The final constructor generates a Tuple2f
with the value of (0.0, 0.0).
Methods
The set methods set the value of tuple this to the values provided. The get
method copies the values of the elements of this tuple into the array t.
The first add method computes the element-by-element sum of tuples t1 and t2,
placing the result in this. The second add method computes the element-by-ele-
ment sum of this tuple and tuple t1, placing the result in this. The first sub
method performs an element-by-element subtraction of tuple t2 from tuple t1
and places the result in this (this = t1 – t2). The second sub method performs an
element-by-element subtraction of t1 from this and places the result in this
(this = this – t1).
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the tuple this and places the
resulting tuple back into this.
Version 1.1 Alpha 01, February 27, 1998

The first scale method multiplies each element of the tuple t1 by the scale factor s and places the resulting scaled tuple into this. The second scale method multiplies each element of this tuple by the scale factor s and places the resulting scaled tuple into this. The first scaleAdd method scales this tuple by the scale factor s, adds the result to tuple t1, and places the result into the tuple this (this = s*this + t1). The second scaleAdd method scales tuple t1 by the scale factor s, adds the result to tuple t2, then places the result into the tuple this (this = s*t1 + t2).
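The two scaleAdd variants can be sketched in plain Java (a standalone illustration of the specified arithmetic, not the actual Tuple2f class; the helper name is hypothetical):

```java
// Plain-Java sketch of the scaleAdd arithmetic: result = s * t + add,
// computed element by element.
public class ScaleAddDemo {
    static float[] scaleAdd(float s, float[] t, float[] add) {
        return new float[] { s * t[0] + add[0], s * t[1] + add[1] };
    }

    public static void main(String[] args) {
        float[] self = { 1.0f, 2.0f };
        float[] t1 = { 0.5f, 0.5f };
        // First variant: this = s*this + t1
        float[] a = scaleAdd(2.0f, self, t1);
        System.out.println(a[0] + " " + a[1]); // prints "2.5 4.5"
        // Second variant: this = s*t1 + t2
        float[] b = scaleAdd(2.0f, t1, new float[] { 1.0f, 1.0f });
        System.out.println(b[0] + " " + b[1]); // prints "2.0 2.0"
    }
}
```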
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second
clamp method clamps the values from tuple t to the range [min, max] and assigns
these clamped values to this tuple. The first clampMin method clamps each value of this tuple to the min parameter. The second clampMin method clamps each value of the tuple t to the min parameter and assigns these clamped values to this tuple. The first
clampMax method clamps each value of this tuple to the max parameter. The sec-
ond clampMax method clamps each value of tuple t to the max parameter and
assigns these clamped values to this tuple. In each method the values of tuple t
remain unchanged.
The first method linearly interpolates between tuples t1 and t2 and places the
result into this tuple (this = alpha * t1 + (1 – alpha) * t2). The second method lin-
early interpolates between this tuple and tuple t1 and places the result into this
tuple (this = alpha * this + (1 – alpha) * t1).
This method returns true if all of the data members of tuple t1 are equal to the
corresponding data members in this tuple.
This method returns true if the L∞ distance between this tuple and tuple t1 is less than or equal to the epsilon parameter. Otherwise, this method returns false. The L∞ distance is equal to max[abs(x1 – x2), abs(y1 – y2)].
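The L∞ comparison can be sketched in plain Java (a standalone illustration of the specified test, not the actual Tuple2f class; the helper names are hypothetical):

```java
public class LinfDemo {
    // L-infinity distance between (x1, y1) and (x2, y2):
    // the largest per-component absolute difference.
    static float distanceLinf(float x1, float y1, float x2, float y2) {
        return Math.max(Math.abs(x1 - x2), Math.abs(y1 - y2));
    }

    // true when the tuples differ by at most eps in every component
    static boolean epsilonEquals(float x1, float y1, float x2, float y2, float eps) {
        return distanceLinf(x1, y1, x2, y2) <= eps;
    }

    public static void main(String[] args) {
        System.out.println(distanceLinf(1f, 2f, 4f, 6f));              // prints "4.0"
        System.out.println(epsilonEquals(1f, 2f, 1.05f, 2.02f, 0.1f)); // prints "true"
    }
}
```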
The hashCode method returns a hash number based on the data values in this
object. Two Tuple2f objects with identical data values (that is, equals(Tuple2f)
returns true) will return the same hash number. Two objects with different data
members may return the same hash number, although this is not likely.
This method returns a string that contains the values of this Tuple2f.
Constructors
These five constructors each return a new Point2f. The first constructor generates
a Point2f from two floating-point numbers x and y. The second constructor gen-
erates a Point2f from the first two elements of array p. The third constructor gen-
erates a Point2f from the point p1. The fourth constructor generates a Point2f
from the Tuple2f t1. The final constructor generates a Point2f with the value of
(0.0, 0.0).
Methods
This method computes the Euclidean distance between this point and point p1 and returns the result.
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to
abs ( x1 – x2 ) + abs ( y1 – y2 )
This method computes the L∞ distance between this point and point p1. The L∞ distance is equal to max[abs(x1 – x2), abs(y1 – y2)].
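The three distance measures can be sketched in plain Java (standalone helpers mirroring the specified formulas, not the actual Point2f class):

```java
public class DistanceDemo {
    // Euclidean distance: sqrt((x1-x2)^2 + (y1-y2)^2)
    static double distance(double x1, double y1, double x2, double y2) {
        double dx = x1 - x2, dy = y1 - y2;
        return Math.sqrt(dx * dx + dy * dy);
    }

    // L1 (Manhattan) distance: abs(x1-x2) + abs(y1-y2)
    static double distanceL1(double x1, double y1, double x2, double y2) {
        return Math.abs(x1 - x2) + Math.abs(y1 - y2);
    }

    // L-infinity distance: max[abs(x1-x2), abs(y1-y2)]
    static double distanceLinf(double x1, double y1, double x2, double y2) {
        return Math.max(Math.abs(x1 - x2), Math.abs(y1 - y2));
    }

    public static void main(String[] args) {
        System.out.println(distance(0, 0, 3, 4));     // prints "5.0"
        System.out.println(distanceL1(0, 0, 3, 4));   // prints "7.0"
        System.out.println(distanceLinf(0, 0, 3, 4)); // prints "4.0"
    }
}
```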
Constructors
These five constructors each return a new Vector2f. The first constructor generates a Vector2f from two floating-point numbers x and y. The second constructor
generates a Vector2f from the first two elements of array v. The third constructor
generates a Vector2f from the vector v1. The fourth constructor generates a
Vector2f from the specified Tuple2f. The final constructor generates a Vector2f
with the value of (0.0, 0.0).
Methods
The dot method computes the dot product between this vector and vector v1 and
returns the resulting value.
The lengthSquared method computes the square of the length of the vector
this and returns its length as a single-precision floating-point number. The
length method computes the length of the vector this and returns its length as
a single-precision floating-point number.
The first normalize method normalizes the vector v1 to unit length and places
the result in this. The second normalize method normalizes the vector this
and places the resulting unit vector back into this.
This method returns the angle, in radians, between this vector and vector v1. The
return value is constrained to the range [0, π].
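The angle computation can be sketched in plain Java (a standalone illustration of the specified behavior via the dot product, not the actual Vector2f class; the helper name is hypothetical):

```java
public class AngleDemo {
    // angle = acos(dot(a, b) / (|a| * |b|)), constrained to [0, PI]
    static double angle(double ax, double ay, double bx, double by) {
        double dot = ax * bx + ay * by;
        double lenProduct = Math.sqrt(ax * ax + ay * ay)
                          * Math.sqrt(bx * bx + by * by);
        // guard against rounding pushing the cosine slightly outside [-1, 1]
        double cos = Math.max(-1.0, Math.min(1.0, dot / lenProduct));
        return Math.acos(cos);
    }

    public static void main(String[] args) {
        System.out.println(angle(1, 0, 0, 1)); // perpendicular vectors: PI/2
        System.out.println(angle(1, 0, 1, 0)); // parallel vectors: prints "0.0"
    }
}
```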
Constructors
These five constructors each return a new TexCoord2f. The first constructor generates a TexCoord2f from two floating-point numbers x and y. The second constructor generates a TexCoord2f from the first two elements of array v. The third
constructor generates a TexCoord2f from the TexCoord2f v1. The fourth con-
structor generates a TexCoord2f from the Tuple2f t1. The final constructor gen-
erates a TexCoord2f with the value of (0.0, 0.0).
Variables
The component values of a Tuple3b are directly accessible through the public
variables x, y, and z. To access the x (red) component of a Tuple3b called
myColor, a programmer would write myColor.x. The programmer would access
the y (green) and z (blue) components similarly.
Note: Java defines a byte as a signed integer in the range [−128, 127]. However,
colors are more typically represented by values in the range [0, 255]. Java 3D rec-
ognizes this and, in those cases where Color3b is used to represent color, treats the
bytes as if the range were [0, 255].
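The byte reinterpretation described in the note can be sketched in plain Java (the helper names are hypothetical, for illustration only):

```java
public class ByteColorDemo {
    // Java bytes are signed [-128, 127]; color components are conceptually
    // [0, 255]. Masking with 0xFF recovers the unsigned value; casting an
    // int to byte truncates back to the signed representation.
    static int toUnsigned(byte b) { return b & 0xFF; }
    static byte toByte(int v)     { return (byte) v; }

    public static void main(String[] args) {
        byte red = toByte(200);              // stored as the signed byte -56
        System.out.println(red);             // prints "-56"
        System.out.println(toUnsigned(red)); // prints "200"
    }
}
```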
public byte x
public byte y
public byte z
Constructors
These four constructors each return a new Tuple3b. The first constructor gener-
ates a Tuple3b from three bytes b1, b2, and b3. The second constructor generates
a Tuple3b from the first three elements of array t. The third constructor gener-
ates a Tuple3b from the byte-precision Tuple3b t1. The final constructor gener-
ates a Tuple3b with the value of (0.0, 0.0, 0.0).
Methods
This method returns a string that contains the values of this Tuple3b.
The first set method sets the values of the x, y, and z data members of this
Tuple3b to the values in the array t of length three. The second set method sets
the values of the x, y, and z data members of this Tuple3b to the values in the
argument tuple t1. The first get method places the values of the x, y, and z com-
ponents of this Tuple3b into the array t of length three. The second get method
places the values of the x, y, and z components of this Tuple3b into the tuple t1.
public boolean equals(Tuple3b t1)
This method returns true if all of the data members of Tuple3b t1 are equal to
the corresponding data members in this tuple.
This method returns a hash number based on the data values in this object. Two
different Tuple3b objects with identical data values (that is, equals(Tuple3b)
returns true) will return the same hash number. Two tuples with different data
members may return the same hash value, although this is not likely.
Constructors
These five constructors each return a new Color3b. The first constructor generates a Color3b from three bytes c1, c2, and c3. The second constructor generates a Color3b from the first three elements of array c. The third constructor generates a Color3b from the byte-precision Color3b c1. The fourth constructor generates a Color3b from the tuple t1. The final constructor generates a Color3b with the value of (0,0,0).
Variables
The component values of a Tuple3d are directly accessible through the public variables x, y, and z. To access the x component of a Tuple3d called upperLeftCorner, a programmer would write upperLeftCorner.x. The programmer would access the y and z components similarly.
public double x
public double y
public double z
Constructors
These five constructors each return a new Tuple3d. The first constructor gener-
ates a Tuple3d from three floating-point numbers x, y, and z. The second con-
structor generates a Tuple3d from the first three elements of array t. The third
constructor generates a Tuple3d from the double-precision Tuple3d t1. The
fourth constructor generates a Tuple3d from the single-precision Tuple3f t1. The
final constructor generates a Tuple3d with the value of (0.0, 0.0, 0.0).
Methods
The four set methods set the value of tuple this to the values specified or to the
values of the specified vectors. The two get methods copy the x, y, and z values
into the array t of length three.
The first add method computes the element-by-element sum of tuples t1 and t2
and places the result in this. The second add method computes the ele-
ment-by-element sum of this tuple and tuple t1 and places the result into this.
The first sub method performs an element-by-element subtraction of tuple t2
from tuple t1 and places the result in this (this = t1 – t2). The second sub
method performs an element-by-element subtraction of tuple t1 from this tuple
and places the result in this (this = this – t1).
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the tuple this and places the
resulting tuple back into this.
The first scale method multiplies each element of the tuple t1 by the scale fac-
tor s and places the resulting scaled tuple into this. The second scale method
multiplies each element of this tuple by the scale factor s and places the result-
ing scaled tuple back into this. The first scaleAdd method scales this tuple by
the scale factor s, adds the result to tuple t1, and places the result into tuple this
(this = s*this + t1). The second scaleAdd method scales the tuple t1 by the scale
factor s, adds the result to the tuple t2, and places the result into the tuple this
(this = s*t1 + t2).
This method returns a string that contains the values of this Tuple3d. The form is
(x, y, z).
This method returns a hash number based on the data values in this object. Two
different Tuple3d objects with identical data values (that is, equals(Tuple3d)
returns true) will return the same hash number. Two tuples with different data
members may return the same hash value, although this is not likely.
This method returns true if all of the data members of Tuple3d v1 are equal to
the corresponding data members in this Tuple3d.
This method returns true if the L∞ distance between this tuple and tuple t1 is less than or equal to the epsilon parameter. Otherwise, this method returns false. The L∞ distance is equal to max[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2)].
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second
clamp method clamps the values from tuple t to the range [min, max] and assigns
these clamped values to this tuple. The first clampMin method clamps each value of this tuple to the min parameter. The second clampMin method clamps each value of the tuple t to the min parameter and assigns these clamped values to this tuple. The first
clampMax method clamps each value of this tuple to the max parameter. The sec-
ond clampMax method clamps each value of tuple t to the max parameter and
assigns these clamped values to this tuple. In each method, the values of tuple t
remain unchanged.
The first interpolate method linearly interpolates between tuples t1 and t2 and
places the result into this tuple (this = alpha * t1 + (1 – alpha) * t2). The second
interpolate method linearly interpolates between this tuple and tuple t1 and
places the result into this tuple (this = alpha * this + (1 – alpha) * t1).
Constructors
These seven constructors each return a new Point3d. The first constructor generates a Point3d from three floating-point numbers x, y, and z. The second constructor generates a Point3d from the first three elements of array p. The third constructor generates a Point3d from the double-precision Point3d p1. The fourth constructor generates a Point3d from the single-precision Point3f p1. The fifth and sixth constructors generate a Point3d from the tuple t1 (a Tuple3d or a Tuple3f, respectively). The final constructor generates a Point3d with the value of (0.0, 0.0, 0.0).
Methods
This method computes the L1 (Manhattan) distance between this point and point p1. The L1 distance is equal to abs(x1 – x2) + abs(y1 – y2) + abs(z1 – z2).
This method computes the L∞ distance between this point and point p1. The L∞ distance is equal to max[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2)].
This method multiplies each of the x, y, and z components of the Point4d parameter p1 by 1/w and places the projected values into this point.
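The projection can be sketched in plain Java (a standalone illustration of the specified arithmetic on a homogeneous point, not the actual Point3d class):

```java
public class ProjectDemo {
    // project: divide x, y, and z of a homogeneous point (x, y, z, w) by w.
    static double[] project(double x, double y, double z, double w) {
        return new double[] { x / w, y / w, z / w };
    }

    public static void main(String[] args) {
        double[] p = project(2.0, 4.0, 6.0, 2.0);
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // prints "1.0 2.0 3.0"
    }
}
```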
Constructors
These seven constructors each return a new Vector3d. The first constructor generates a Vector3d from three floating-point numbers x, y, and z. The second constructor generates a Vector3d from the first three elements of array v. The third constructor generates a Vector3d from the double-precision vector v1. The fourth constructor generates a Vector3d from the single-precision vector v1. The fifth and sixth constructors generate a Vector3d from the tuple t1 (a Tuple3d or a Tuple3f, respectively). The final constructor generates a Vector3d with the value of (0.0, 0.0, 0.0).
Methods
The cross method computes the vector cross-product of vectors v1 and v2 and
places the result in this.
The first normalize method normalizes the vector v1 to unit length and places
the result in this. The second normalize method normalizes the vector this
and places the resulting unit vector back into this.
The dot method returns the dot product of this vector and vector v1.
The lengthSquared method returns the squared length of this vector. The
length method returns the length of this vector.
This method returns the angle, in radians, between this vector and the vector v1
parameter. The return value is constrained to the range [0, π].
Variables
The component values of a Tuple3f are directly accessible through the public
variables x, y, and z. To access the x component of a Tuple3f called upperLeft-
Corner, a programmer would write upperLeftCorner.x. The programmer
would access the y and z components similarly.
public float x
public float y
public float z
Constructors
These five constructors each return a new Tuple3f. The first constructor generates
a Tuple3f from three floating-point numbers x, y, and z. The second constructor
generates a Tuple3f from the first three elements of array t. The third constructor
generates a Tuple3f from the double-precision Tuple3d t1. The fourth construc-
tor generates a Tuple3f from the single-precision Tuple3f t1. The final construc-
tor generates a Tuple3f with the value of (0.0, 0.0, 0.0).
Methods
This method returns a string that contains the values of this Tuple3f.
The four set methods set the value of vector this to the coordinates provided or
to the values of the vectors provided. The first get method gets the value of this
vector and copies the values into the array t. The second get method gets the
value of this vector and copies the values into tuple t.
The first add method computes the element-by-element sum of tuples t1 and t2,
placing the result in this. The second add method computes the element-by-ele-
ment sum of this and tuple t1 and places the result in this. The first sub
method performs an element-by-element subtraction of tuple t2 from tuple t1
and places the result in this (this = t1 – t2). The second sub method performs an
element-by-element subtraction of tuple t1 from this tuple and places the result
into this (this = this – t1).
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the vector this and places the
resulting tuple back into this.
The first scale method multiplies each element of the vector t1 by the scale fac-
tor s and places the resulting scaled vector into this. The second scale method
multiplies the vector this by the scale factor s and replaces this with the scaled
value. The first scaleAdd method scales this tuple by the scale factor s, adds the
result to tuple t1, and places the result into tuple this (this = s*this + t1). The
second scaleAdd method scales the tuple t1 by the scale factor s, adds the result
to the tuple t2, and places the result into the tuple this (this = s*t1 + t2).
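The element-by-element arithmetic described above (add, sub, scale, scaleAdd) can be sketched over plain float arrays. The helper class is hypothetical, not the Tuple3f source.

```java
// Minimal sketch of the tuple arithmetic described above; tuples are
// {x, y, z} float arrays and every method returns a fresh array.
public class TupleOps {
    public static float[] add(float[] t1, float[] t2) {
        return new float[] { t1[0] + t2[0], t1[1] + t2[1], t1[2] + t2[2] };
    }

    public static float[] sub(float[] t1, float[] t2) {   // t1 - t2
        return new float[] { t1[0] - t2[0], t1[1] - t2[1], t1[2] - t2[2] };
    }

    public static float[] scale(float s, float[] t) {
        return new float[] { s * t[0], s * t[1], s * t[2] };
    }

    // scaleAdd: s * t1 + t2, element by element
    public static float[] scaleAdd(float s, float[] t1, float[] t2) {
        return add(scale(s, t1), t2);
    }
}
```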
This method returns true if all of the data members of tuple t1 are equal to the
corresponding data members in this Tuple3f.
This method returns true if the L∞ distance between this tuple and tuple t1 is
less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to
max[abs(x1 - x2), abs(y1 - y2), abs(z1 - z2)]
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second
clamp method clamps the values from tuple t to the range [min, max] and assigns
these clamped values to this tuple. The first clampMin method clamps each value
of this tuple to the min parameter. The second clampMin method clamps each
value of the tuple t to the min parameter and assigns these clamped values to this tuple. The first
clampMax method clamps each value of this tuple to the max parameter. The sec-
ond clampMax method clamps each value of tuple t to the max parameter and
assigns these clamped values to this tuple. In each method the values of tuple t
remain unchanged.
The first method linearly interpolates between tuples t1 and t2 and places the
result into this tuple (this = alpha * t1 + (1 – alpha) * t2). The second method lin-
early interpolates between this tuple and tuple t1 and places the result into this
tuple (this = alpha * this + (1–alpha) * t1).
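The clamp and interpolate behavior described above can be sketched as follows. The helper is hypothetical; the interpolation uses the alpha * t1 + (1 - alpha) * t2 form given in the text.

```java
// Sketch of per-component clamping and linear interpolation for tuples
// represented as float arrays.
public class ClampLerp {
    public static float clamp(float v, float min, float max) {
        return Math.min(max, Math.max(min, v));
    }

    // clamp every component of t to [min, max]; t itself is unchanged
    public static float[] clamp(float[] t, float min, float max) {
        float[] r = new float[t.length];
        for (int i = 0; i < t.length; i++) r[i] = clamp(t[i], min, max);
        return r;
    }

    // this = alpha * t1 + (1 - alpha) * t2, element by element
    public static float[] interpolate(float[] t1, float[] t2, float alpha) {
        float[] r = new float[t1.length];
        for (int i = 0; i < t1.length; i++)
            r[i] = alpha * t1[i] + (1 - alpha) * t2[i];
        return r;
    }
}
```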
int hashCode()
This method returns a hash number based on the data values in this object. Two
different Tuple3f objects with identical data values (that is, equals(Tuple3f)
returns true) will return the same hash number. Two tuples with different data
members may return the same hash value, although this is not likely.
Constructors
These seven constructors each return a new Point3f. The first constructor generates a point from three floating-point numbers x, y, and z. The second constructor (Point3f(float p[])) generates a point from the first three elements of array p. The third constructor generates a point from the double-precision point p1. The fourth constructor generates a point from the single-precision point p1. The fifth and sixth constructors generate a Point3f from the tuple t1. The final constructor generates a point with the value of (0.0, 0.0, 0.0).
Methods
The distance method computes the Euclidean distance between this point and
the point p1 and returns the result. The distanceSquared method computes the
square of the Euclidean distance between this point and the point p1 and returns
the result.
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to
abs(x1 - x2) + abs(y1 - y2) + abs(z1 - z2)
This method computes the L∞ distance between this point and point p1. The L∞
distance is equal to
max[abs(x1 - x2), abs(y1 - y2), abs(z1 - z2)]
This method multiplies each of the x, y, and z components of the Point4f param-
eter p1 by 1/w and places the projected values into this point.
Constructors
These seven constructors each return a new Vector3f. The first constructor generates a Vector3f from three floating-point numbers x, y, and z. The second constructor generates a Vector3f from the first three elements of array v. The third constructor generates a Vector3f from the double-precision Vector3d v1. The fourth constructor generates a Vector3f from the single-precision Vector3f v1. The fifth and sixth constructors generate a Vector3f from the tuple t1. The final constructor generates a Vector3f with the value of (0.0, 0.0, 0.0).
Methods
The length method computes the length of the vector this and returns its length
as a single-precision floating-point number. The lengthSquared method com-
putes the square of the length of the vector this and returns its length as a sin-
gle-precision floating-point number.
The cross method computes the vector cross-product of v1 and v2 and places
the result in this.
The dot method computes the dot product between this vector and the vector v1
and returns the resulting value.
The first normalize method normalizes the vector v1 to unit length and places
the result in this. The second normalize method normalizes the vector this
and places the resulting unit vector back into this.
This method returns the angle, in radians, between this vector and the vector
parameter. The return value is constrained to the range [0, π].
Constructors
These six constructors each return a new TexCoord3f. The first constructor generates a texture coordinate from three floating-point numbers x, y, and z. The second constructor generates a texture coordinate from the first three elements of array v. The third constructor generates a texture coordinate from the single-precision TexCoord3f v1. The fourth and fifth constructors generate a texture coordinate from tuple t1. The final constructor generates a texture coordinate with the value of (0.0, 0.0, 0.0).
Constructors
These six constructors each return a new Color3f. The first constructor generates a Color3f from three floating-point numbers x, y, and z. The second constructor (Color3f(float v[])) generates a Color3f from the first three elements of array v. The third constructor generates a Color3f from the single-precision color v1. The fourth and fifth constructors generate a Color3f from the tuple t1. The final constructor generates a Color3f with the value of (0.0, 0.0, 0.0).
Variables
The component values of a Tuple4b are directly accessible through the public
variables x, y, z, and w. The x, y, z, and w values represent the red, green, blue,
and alpha values, respectively. To access the x (red) component of a Tuple4b
called backgroundColor, a programmer would write backgroundColor.x. The
programmer would access the y (green), z (blue), and w (alpha) components sim-
ilarly.
Note: Java defines a byte as a signed integer in the range [–128, 127]. However,
colors are more typically represented by values in the range [0, 255]. Java 3D rec-
ognizes this and, in those cases where Color4b is used to represent color, treats the
bytes as if the range were [0, 255].
public byte x
public byte y
public byte z
public byte w
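The note above about signed bytes can be made concrete: masking with 0xFF reinterprets a signed Java byte as a value in [0, 255], and a plain cast goes the other way. The helper class is hypothetical, purely illustrative.

```java
// Moving between Java's signed byte range [-128, 127] and the
// unsigned [0, 255] color range described in the note above.
public class ByteColor {
    // reinterpret a signed byte as an unsigned color component
    public static int toUnsigned(byte b) {
        return b & 0xFF;          // e.g. (byte) -1 -> 255
    }

    // store an unsigned color component [0, 255] back into a byte
    public static byte toByte(int v) {
        return (byte) v;          // e.g. 200 -> (byte) -56
    }
}
```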
Constructors
These four constructors each return a new Tuple4b. The first constructor generates a Tuple4b from four bytes b1, b2, b3, and b4. The second constructor (Tuple4b(byte t[])) generates a Tuple4b from the first four elements of array t. The third constructor generates a Tuple4b from the byte-precision Tuple4b t1. The final constructor generates a Tuple4b with the value of (0, 0, 0, 0).
Methods
This method returns a string that contains the values of this Tuple4b.
The first set method sets the value of the data members of this Tuple4b to the
value of the array b. The second set method sets the value of the data members
of this Tuple4b to the value of the argument tuple t1. The first get method
places the values of the x, y, z, and w components of this Tuple4b into the byte
array b. The second get method places the values of the x, y, z, and w compo-
nents of this Tuple4b into the Tuple4b t1.
This method returns true if all of the data members of Tuple4b t1 are equal to
the corresponding data members in this Tuple4b.
This method returns a hash number based on the data values in this object. Two
different Tuple4b objects with identical data values (that is, equals(Tuple4b)
returns true) will return the same hash number. Two Tuple4b objects with differ-
ent data members may return the same hash value, although this is not likely.
Constructors
These five constructors each return a new Color4b. The first constructor generates a Color4b from four bytes b1, b2, b3, and b4. The second constructor generates a Color4b from the first four elements of byte array c. The third constructor generates a Color4b from the byte-precision Color4b c1. The fourth constructor generates a Color4b from the tuple t1. The final constructor generates a Color4b with the value of (0, 0, 0, 0).
Variables
The component values of a Tuple4d are directly accessible through the public
variables x, y, z, and w. To access the x component of a Tuple4d called upper-
LeftCorner, a programmer would write upperLeftCorner.x. The programmer
would access the y, z, and w components similarly.
public double x
public double y
public double z
public double w
Constructors
These five constructors each return a new Tuple4d. The first constructor generates a Tuple4d from four floating-point numbers x, y, z, and w. The second constructor (Tuple4d(double t[])) generates a Tuple4d from the first four elements of array t. The third constructor generates a Tuple4d from the double-precision tuple t1. The fourth constructor generates a Tuple4d from the single-precision tuple t1. The final constructor generates a Tuple4d with the value of (0.0, 0.0, 0.0, 0.0).
Methods
These methods set the value of the tuple this to the values specified or to the
values of the specified tuples. The first get method retrieves the value of this
tuple and places it into the array t of length four, in x, y, z, w order. The second
get method retrieves the value of this tuple and places it into tuple t.
The first add method computes the element-by-element sum of the tuple t1 and
the tuple t2, placing the result in this. The second add method computes the
element-by-element sum of this tuple and the tuple t1 and places the result in
this. The first sub method performs an element-by-element subtraction of tuple
t2 from tuple t1 and places the result in this. The second sub method performs
an element-by-element subtraction of tuple t1 from this tuple and places the
result in this.
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the tuple this and places the
resulting tuple back into this.
The first scale method multiplies each element of the tuple t1 by the scale fac-
tor s and places the resulting scaled tuple into this. The second scale method
multiplies the tuple this by the scale factor s and replaces this with the scaled
value. The first scaleAdd method scales this tuple by the scale factor s, adds the
result to tuple t1, and places the result into tuple this (this = s*this + t1). The
second scaleAdd method scales the tuple t1 by the scale factor s, adds the result
to the tuple t2, and places the result into the tuple this (this = s*t1 + t2).
The first interpolate method linearly interpolates between tuples t1 and t2 and
places the result into this tuple (this = alpha * t1 + (1 – alpha) * t2). The second
interpolate method linearly interpolates between this tuple and tuple t1 and
places the result into this tuple (this = alpha * this + (1 – alpha) * t1).
This method returns a string that contains the values of this tuple. The form is
(x, y, z, w).
This method returns true if all of the data members of tuple v1 are equal to the
corresponding data members in this tuple.
This method returns true if the L∞ distance between this Tuple4d and Tuple4d
t1 is less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to
max[abs(x1 - x2), abs(y1 - y2), abs(z1 - z2), abs(w1 - w2)]
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second clamp method clamps the values from tuple t to the range [min, max] and assigns these clamped values to this tuple. The first clampMin method clamps each value of this tuple to the min parameter. The second clampMin method clamps each value of tuple t to the min parameter and assigns these clamped values to this tuple. The first clampMax method clamps each value of this tuple to the max parameter. The second clampMax method clamps each value of tuple t to the max parameter and assigns these clamped values to this tuple. In each method the values of tuple t remain unchanged.
This method returns a hash number based on the data values in this object. Two
different Tuple4d objects with identical data values (that is, equals(Tuple4d)
returns true) will return the same hash number. Two Tuple4d objects with differ-
ent data members may return the same hash value, although this is not likely.
Constructors
These seven constructors each return a new Point4d. The first constructor generates a Point4d from four floating-point numbers x, y, z, and w. The second constructor (Point4d(double p[])) generates a Point4d from the first four elements of array p. The third constructor generates a Point4d from the double-precision point p1. The fourth constructor generates a Point4d from the single-precision point p1. The fifth and sixth constructors generate a Point4d from tuple t1. The final constructor generates a Point4d with the value of (0.0, 0.0, 0.0, 0.0).
Methods
The distance method computes the Euclidean distance between this point and
the point p1 and returns the result. The distanceSquared method computes the
square of the Euclidean distance between this point and the point p1 and returns
the result.
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to
abs(x1 - x2) + abs(y1 - y2) + abs(z1 - z2) + abs(w1 - w2)
This method computes the L∞ distance between this point and point p1. The L∞
distance is equal to
max[abs(x1 - x2), abs(y1 - y2), abs(z1 - z2), abs(w1 - w2)]
Constructors
These seven constructors each return a new Vector4d. The first constructor generates a Vector4d from four floating-point numbers x, y, z, and w. The second constructor generates a Vector4d from the first four elements of array v. The third constructor generates a Vector4d from the double-precision Vector4d v1. The fourth constructor generates a Vector4d from the single-precision Vector4f v1. The fifth and sixth constructors generate a Vector4d from tuple t1. The final constructor generates a Vector4d with the value of (0.0, 0.0, 0.0, 0.0).
Methods
The length method computes the length of the vector this and returns its length
as a double-precision floating-point number. The lengthSquared method com-
putes the square of the length of the vector this and returns its length as a dou-
ble-precision floating-point number.
This method returns the dot product of this vector and vector v1.
The first normalize method normalizes the vector v1 to unit length and places
the result in this. The second normalize method normalizes the vector this
and places the resulting unit vector back into this.
This method returns the (four-space) angle, in radians, between this vector and
the vector v1 parameter. The return value is constrained to the range [0, π].
Constructors
These seven constructors each return a new Quat4d. The first constructor generates a quaternion from four floating-point numbers x, y, z, and w. The second constructor generates a quaternion from the first four elements of array q. The third constructor generates a quaternion from the double-precision quaternion q1. The fourth constructor generates a quaternion from the single-precision quaternion q1. The fifth and sixth constructors generate a Quat4d from tuple t1. The final constructor generates a quaternion with the value of (0.0, 0.0, 0.0, 0.0).
Methods
The first conjugate method sets the values of this quaternion to the conjugate of
quaternion q1. The second conjugate method negates the value of each of this
quaternion’s x, y, and z coordinates in place.
The first mul method sets the value of this quaternion to the quaternion product
of quaternions q1 and q2 (this = q1 * q2). Note that this is safe for aliasing (that
is, this can be q1 or q2). The second mul method sets the value of this quater-
nion to the quaternion products of itself and q1 (this = this * q1).
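The quaternion product and conjugate described above can be sketched as follows. The helper is hypothetical and stores quaternions as {x, y, z, w} arrays, the component order used throughout this chapter.

```java
// Sketch of quaternion multiplication (this = q1 * q2) and conjugation.
public class QuatOps {
    public static double[] mul(double[] q1, double[] q2) {
        double x1 = q1[0], y1 = q1[1], z1 = q1[2], w1 = q1[3];
        double x2 = q2[0], y2 = q2[1], z2 = q2[2], w2 = q2[3];
        // The result goes into a fresh array, so mul(q, q) is alias-safe,
        // matching the note that this may be q1 or q2.
        return new double[] {
            w1 * x2 + w2 * x1 + y1 * z2 - z1 * y2,
            w1 * y2 + w2 * y1 + z1 * x2 - x1 * z2,
            w1 * z2 + w2 * z1 + x1 * y2 - y1 * x2,
            w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
        };
    }

    // conjugate: negate the x, y, and z components
    public static double[] conjugate(double[] q) {
        return new double[] { -q[0], -q[1], -q[2], q[3] };
    }
}
```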
The first inverse method sets the value of this quaternion to the quaternion
inverse of quaternion q1. The second inverse method sets the value of this
quaternion to the quaternion inverse of itself.
The first normalize method sets the value of this quaternion to the normalized
value of quaternion q1. The second normalize method normalizes the value of
this quaternion in place.
These set methods set the value of this quaternion to the rotational component
of the passed matrix.
The first method performs a great circle interpolation between this quaternion
and the quaternion parameter and places the result into this quaternion. The sec-
ond method performs a great circle interpolation between quaternion q1 and
quaternion q2 and places the result into this quaternion.
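Great circle interpolation between unit quaternions can be sketched as below. This is a hypothetical helper, not the Quat4d source; it assumes unit-length {x, y, z, w} inputs and falls back to linear interpolation when the quaternions are nearly parallel.

```java
// Spherical linear interpolation along the great circle between two
// unit quaternions q1 and q2.
public class Slerp {
    public static double[] interpolate(double[] q1, double[] q2, double alpha) {
        double d = q1[0] * q2[0] + q1[1] * q2[1]
                 + q1[2] * q2[2] + q1[3] * q2[3];
        double sign = 1.0;
        if (d < 0) { d = -d; sign = -1.0; }    // take the shorter arc
        double s1, s2;
        if (d > 0.9999) {                      // nearly parallel: lerp
            s1 = 1 - alpha;
            s2 = alpha;
        } else {
            double theta = Math.acos(d);
            s1 = Math.sin((1 - alpha) * theta) / Math.sin(theta);
            s2 = Math.sin(alpha * theta) / Math.sin(theta);
        }
        double[] r = new double[4];
        for (int i = 0; i < 4; i++) r[i] = s1 * q1[i] + sign * s2 * q2[i];
        return r;
    }
}
```

Interpolating halfway between the identity and a 90-degree rotation yields the corresponding 45-degree rotation.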
Variables
The component values of a Tuple4f are directly accessible through the public
variables x, y, z, and w. To access the x component of a Tuple4f called upper-
LeftCorner, a programmer would write upperLeftCorner.x. The programmer
would access the y, z, and w components similarly.
public float x
public float y
public float z
public float w
Constructors
These five constructors each return a new Tuple4f. The first constructor generates
a Tuple4f from four floating-point numbers x, y, z, and w. The second constructor
(Tuple4f(float t[])) generates a Tuple4f from the first four elements of array
t. The third constructor generates a Tuple4f from the double-precision tuple t1.
The fourth constructor generates a Tuple4f from the single-precision tuple t1.
The final constructor generates a Tuple4f with the value of (0.0, 0.0, 0.0, 0.0).
Methods
The first set method sets the value of this tuple to the specified x, y, z, and w val-
ues. The second set method sets the value of this tuple to the specified coordi-
nates in the array. The next two methods set the value of tuple this to the value
of tuple t1. The get methods copy the value of this tuple into the tuple t.
The first add method computes the element-by-element sum of tuples t1 and t2
and places the result in this. The second add method computes the ele-
ment-by-element sum of this tuple and tuple t1 and places the result in this.
The first sub method performs the element-by-element subtraction of tuple t2
from tuple t1 and places the result in this (this = t1 – t2). The second sub
method performs the element-by-element subtraction of tuple t1 from this tuple
and places the result in this (this = this – t1).
The first negate method sets the values of this tuple to the negative of the values
from tuple t1. The second negate method negates the tuple this and places the
resulting tuple back into this.
The first scale method multiplies each element of the tuple t1 by the scale fac-
tor s and places the resulting scaled tuple into this. The second scale method
multiplies the tuple this by the scale factor s, replacing this with the scaled
value. The first scaleAdd method scales this tuple by the scale factor s, adds the
result to tuple t1, and places the result into tuple this (this = s*this + t1). The
second scaleAdd method scales the tuple t1 by the scale factor s, adds the result
to the tuple t2, and places the result into the tuple this (this = s*t1 + t2).
This method returns a string that contains the values of this Tuple4f. The form is
(x, y, z, w).
This method returns true if all of the data members of Tuple4f t1 are equal to
the corresponding data members in this Tuple4f.
This method returns true if the L∞ distance between this Tuple4f and Tuple4f t1
is less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to
max[abs(x1 - x2), abs(y1 - y2), abs(z1 - z2), abs(w1 - w2)]
The first absolute method sets each component of this tuple to its absolute
value. The second absolute method sets each component of this tuple to the
absolute value of the corresponding component in tuple t.
The first clamp method clamps this tuple to the range [min, max]. The second clamp method clamps the values from tuple t to the range [min, max] and assigns these clamped values to this tuple. The first clampMin method clamps each value of this tuple to the min parameter. The second clampMin method clamps each value of tuple t to the min parameter and assigns these clamped values to this tuple. The first clampMax method clamps each value of this tuple to the max parameter. The second clampMax method clamps each value of tuple t to the max parameter and assigns these clamped values to this tuple. In each method the values of tuple t remain unchanged.
The first interpolate method linearly interpolates between tuples t1 and t2 and
places the result into this tuple (this = alpha * t1 + (1 – alpha) * t2). The second
interpolate method linearly interpolates between this tuple and tuple t1 and
places the result into this tuple (this = alpha * this + (1 – alpha) * t1).
This method returns a hash number based on the data values in this object. Two
different Tuple4f objects with identical data values (that is, equals(Tuple4f)
returns true) will return the same hash number. Two Tuple4f objects with differ-
ent data members may return the same hash value, although this is not likely.
Constructors
These seven constructors each return a new Point4f. The first constructor generates a Point4f from four floating-point numbers x, y, z, and w. The second constructor (Point4f(float p[])) generates a Point4f from the first four elements of array p. The third constructor generates a Point4f from the double-precision point p1. The fourth constructor generates a Point4f from the single-precision point p1. The fifth and sixth constructors generate a Point4f from tuple t1. The final constructor generates a Point4f with the value of (0.0, 0.0, 0.0, 0.0).
Methods
This method computes the L1 (Manhattan) distance between this point and point
p1. The L1 distance is equal to
abs(x1 - x2) + abs(y1 - y2) + abs(z1 - z2) + abs(w1 - w2)
This method computes the L∞ distance between this point and point p1. The L∞
distance is equal to
max[abs(x1 - x2), abs(y1 - y2), abs(z1 - z2), abs(w1 - w2)]
Constructors
These six constructors each return a new Color4f. The first constructor generates a Color4f from four floating-point numbers x, y, z, and w. The second constructor generates a Color4f from the first four elements of array c. The third constructor generates a Color4f from the single-precision color c1. The fourth and fifth constructors generate a Color4f from tuple t1. The final constructor generates a Color4f with the value of (0.0, 0.0, 0.0, 0.0).
Constructors
These seven constructors each return a new Vector4f. The first constructor generates a Vector4f from four floating-point numbers x, y, z, and w. The second constructor generates a Vector4f from the first four elements of array v. The third constructor generates a Vector4f from the double-precision Vector4d v1. The fourth constructor generates a Vector4f from the single-precision Vector4f v1. The fifth and sixth constructors generate a Vector4f from tuple t1. The final constructor generates a Vector4f with the value of (0.0, 0.0, 0.0, 0.0).
Methods
The length method computes the length of the vector this and returns its length
as a single-precision floating-point number. The lengthSquared method com-
putes the square of the length of the vector this and returns its length as a sin-
gle-precision floating-point number.
The dot method computes the dot product between this vector and the vector v1
and returns the resulting value.
The first normalize method sets the value of this vector to the normalization of
vector v1. The second normalize method normalizes this vector in place.
This method returns the (four-space) angle, in radians, between this vector and
the vector v1 parameter. The return value is constrained to the range [0, π].
Constructors
These seven constructors each return a new Quat4f. The first constructor generates a quaternion from four floating-point numbers x, y, z, and w. The second constructor generates a quaternion from the first four elements of array q. The third constructor generates a quaternion from the double-precision quaternion q1. The fourth constructor generates a quaternion from the single-precision quaternion q1. The fifth and sixth constructors generate a quaternion from tuple t1. The final constructor generates a quaternion with the value of (0.0, 0.0, 0.0, 0.0).
Methods
The first conjugate method sets the value of this quaternion to the conjugate of
quaternion q1. The second conjugate method sets the value of this quaternion to
the conjugate of itself.
The first mul method sets the value of this quaternion to the quaternion product
of quaternions q1 and q2 (this = q1 * q2). Note that this is safe for aliasing (that
is, this can be q1 or q2). The second mul method sets the value of this quater-
nion to the quaternion product of itself and q1 (this = this * q1).
The first inverse method sets the value of this quaternion to the quaternion
inverse of quaternion q1. The second inverse method sets the value of this
quaternion to the quaternion inverse of itself.
The first normalize method sets the value of this quaternion to the normalized
value of quaternion q1. The second normalize method normalizes the value of
this quaternion in place.
These set methods set the value of this quaternion to the rotational component
of the passed matrix.
The first method performs a great circle interpolation between this quaternion
and quaternion q1 and places the result into this quaternion. The second method
performs a great circle interpolation between quaternion q1 and quaternion q2
and places the result into this quaternion.
Variables
The component values of an AxisAngle4d are directly accessible through the
public variables x, y, z, and angle. To access the x component of an
AxisAngle4d called myRotation, a programmer would write myRotation.x. The
programmer would access the y, z, and angle components similarly.
public double x
public double y
public double z
public double angle
The x, y, and z coordinates and the rotational angle, respectively. The rotation
angle is expressed in radians.
Constructors
These five constructors each return a new AxisAngle4d. The first constructor
generates an axis-angle from four floating-point numbers x, y, z, and angle. The
second constructor generates an axis-angle from the first four elements of array
a. The third constructor generates an axis-angle from the double-precision
axis-angle a1. The fourth constructor generates an axis-angle from the sin-
gle-precision axis-angle a1. The final constructor generates an axis-angle with
the value of (0.0, 0.0, 0.0, 0.0).
Methods
The first set method sets the value of this axis-angle to the specified x, y, z, and
angle coordinates. The second set method sets the value of this axis-angle to
the specified coordinates in the array a. The next four set methods set the value of this
axis-angle to the rotational component of the passed matrix m1. The next two set
methods set the value of this axis-angle to the value of axis-angle a1. The last
two set methods set the value of this axis-angle to the value of the passed
quaternion q1. The get method retrieves the value of this axis-angle and places it
into the array a of length four in x,y,z,angle order.
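Setting an axis-angle from a quaternion, as the last two set methods above do, can be sketched as follows. The helper is hypothetical; it assumes a unit quaternion stored as {x, y, z, w} and returns {x, y, z, angle} with the angle in radians.

```java
// Deriving an axis-angle from a unit quaternion: the rotation angle is
// 2 * acos(w), and the axis is the (x, y, z) part scaled to unit length.
public class AxisAngleFromQuat {
    public static double[] fromQuat(double[] q) {
        double angle = 2.0 * Math.acos(q[3]);
        double s = Math.sqrt(1.0 - q[3] * q[3]);   // length of (x, y, z)
        if (s < 1e-12) {
            // angle is ~0, so the axis is arbitrary; use +x by convention
            return new double[] { 1.0, 0.0, 0.0, 0.0 };
        }
        return new double[] { q[0] / s, q[1] / s, q[2] / s, angle };
    }
}
```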
This method returns a string that contains the values of this AxisAngle4d. The
form is (x, y, z, angle).
This method returns true if all of the data members of AxisAngle4d v1 are
equal to the corresponding data members in this axis-angle.
This method returns true if the L∞ distance between this axis-angle and
axis-angle a1 is less than or equal to the epsilon parameter. Otherwise, this
method returns false. The L∞ distance is equal to
max[abs(x1 - x2), abs(y1 - y2), abs(z1 - z2), abs(angle1 - angle2)]
This method returns a hash number based on the data values in this object. Two
different AxisAngle4d objects with identical data values (that is,
equals(AxisAngle4d) returns true) will return the same hash number. Two
AxisAngle4d objects with different data members may return the same hash
value, although this is not likely.
Variables
The component values of an AxisAngle4f are directly accessible through the
public variables x, y, z, and angle. To access the x component of an
AxisAngle4f called myRotation, a programmer would write myRotation.x. The
programmer would access the y, z, and angle components similarly.
public float x
public float y
public float z
public float angle
The x, y, and z coordinates and the rotational angle, respectively. The rotation
angle is expressed in radians.
Constructors
These five constructors each return a new AxisAngle4f. The first constructor
generates an axis-angle from four floating-point numbers x, y, z, and angle. The
second constructor generates an axis-angle from the first four elements of array
a. The third constructor generates an axis-angle from the single-precision
axis-angle a1. The fourth constructor generates an axis-angle from the dou-
ble-precision axis-angle a1. The final constructor generates an axis-angle with
the value of (0.0, 0.0, 0.0, 0.0).
Methods
The first set method sets the value of this axis-angle to the specified x, y, z, and
angle coordinates. The second set method sets the value of this axis-angle to
the specified coordinates in the array a. The next four set methods set the value
of this axis-angle to the rotational component of the passed matrix m1. The next
two set methods set the value of this axis-angle to the value of axis-angle a1.
The last two set methods set the value of this axis-angle to the value of the
passed quaternion q1. The get method retrieves the value of this axis-angle and
places it into the array a of length four in x,y,z,angle order.
This method returns a string that contains the values of this axis-angle. The form
is (x, y, z, angle).
This method returns true if all of the data members of axis-angle a1 are equal to
the corresponding data members in this axis-angle.
This method returns true if the L∞ distance between this axis-angle and
axis-angle a1 is less than or equal to the epsilon parameter. Otherwise, this
method returns false. The L∞ distance is equal to
MAX[abs(x1 – x2), abs(y1 – y2), abs(z1 – z2), abs(angle1 – angle2)]
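The epsilon test can be sketched in plain Java using a four-element array in place of the AxisAngle4f class; the class and method names below are illustrative, not part of the vecmath API.

```java
// Sketch of an L-infinity epsilon comparison for a four-component
// axis-angle stored as {x, y, z, angle}. Not the vecmath implementation.
public class AxisAngleEpsilon {
    // L-infinity distance: the largest absolute component difference.
    static float linfDistance(float[] a, float[] b) {
        float max = 0.0f;
        for (int i = 0; i < 4; i++) {
            float d = Math.abs(a[i] - b[i]);
            if (d > max) max = d;
        }
        return max;
    }

    static boolean epsilonEquals(float[] a, float[] b, float epsilon) {
        return linfDistance(a, b) <= epsilon;
    }

    public static void main(String[] args) {
        float[] r1 = {0.0f, 1.0f, 0.0f, 1.570796f}; // about +y by ~pi/2
        float[] r2 = {0.0f, 1.0f, 0.0f, 1.570797f};
        System.out.println(epsilonEquals(r1, r2, 1e-5f)); // true
    }
}
```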
This method returns a hash number based on the data values in this object. Two
different AxisAngle4f objects with identical data values (that is,
equals(AxisAngle4f) returns true) will return the same hash number. Two
AxisAngle4f objects with different data members may return the same hash
value, although this is not likely.
Constructors
These eight constructors each return a new GVector. The first constructor gener-
ates a generalized mathematical vector with all elements set to 0.0; length rep-
resents the number of elements in the vector. The second and third constructors
generate a generalized mathematical vector and copy the initial value from the
parameter vector. The next four constructors generate a generalized mathemati-
cal vector and copy the initial value from the tuple parameter tuple. The final
constructor generates a generalized mathematical vector by copying length ele-
ments from the array parameter. The parameter length must be less than or
equal to vector.length.
Methods
The first add method computes the element-by-element sum of this GVector and
GVector v1 and places the result in this. The second add method computes the
element-by-element sum of GVectors v1 and v2 and places the result in this.
The first sub method performs the element-by-element subtraction of GVector v1
from this GVector and places the result in this (this = this – v1). The second
sub method performs the element-by-element subtraction of GVector v2 from
GVector v1 and places the result in this (this = v1 – v2).
The first mul method multiplies matrix m1 times vector v1 and places the result
into this vector (this = m1 * v1). The second mul method multiplies the transpose
of vector v1 (that is, v1 becomes a row vector with respect to the multiplication)
times matrix m1 and places the result into this vector (this = transpose(v1) * m1).
The result is technically a row vector, but the GVector class only knows about
column vectors, so the result is stored as a column vector.
This method negates the vector this and places the resulting vector back into
this.
This method changes the size of this vector dynamically. If the size is increased,
no data values are lost. If the size is decreased, only those data values whose vec-
tor positions were eliminated are lost.
The first set method sets the values of this vector to the values found in the array
v. The array should be at least equal in length to the number of elements in the
vector. The second set method sets the values of this vector to the values in vec-
tor v. The last five set methods set the value of this vector to the values in tuple
t.
These methods set and retrieve the specified index value of this vector.
The norm method returns the square root of the sum of the squares of this vector
(its length in n-dimensional space). The normSquared method returns the sum of
the squares of this vector (its length in n-dimensional space).
The first normalize method sets the value of this vector to the normalization of
vector v1. The second normalize method normalizes this vector in place.
The first scale method sets the value of this vector to the scalar multiplication of
the scale factor s with the vector v1. The second scale method scales this vector
by the scale factor s. The scaleAdd method scales the vector v1 by the scale fac-
tor s, adds the result to the vector v2, and places the result into this vector
(this = s*v1 + v2).
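The norm and scaleAdd semantics described above can be sketched with plain double arrays; the class and method names here are illustrative, not the vecmath API.

```java
// Sketch of the described element-by-element operations on an
// n-dimensional vector stored as a double[]. Illustrative only.
public class GVectorOps {
    // this = s*v1 + v2, element by element
    static double[] scaleAdd(double s, double[] v1, double[] v2) {
        double[] out = new double[v1.length];
        for (int i = 0; i < v1.length; i++) {
            out[i] = s * v1[i] + v2[i];
        }
        return out;
    }

    // Square root of the sum of the squares: the vector's length.
    static double norm(double[] v) {
        double sum = 0.0;
        for (double x : v) sum += x * x;
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        double[] r = scaleAdd(2.0, new double[]{1, 2, 3},
                                   new double[]{10, 10, 10});
        System.out.println(r[0] + ", " + r[1] + ", " + r[2]); // 12, 14, 16
    }
}
```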
This method returns a string that contains the values of this vector.
This method returns a hash number based on the data values in this object. Two
different GVector objects with identical data values (that is, equals(GVector)
returns true) will return the same hash number. Two objects with different data
members may return the same hash value, although this is not likely.
This method returns true if all of the data members of GVector vector1 are
equal to the corresponding data members in this GVector.
This method returns true if the L∞ distance between this vector and vector v1 is
less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to
MAX[i = 0,1,2, … n; abs(this.values(i) – v1.values(i))]
This method returns the dot product of this vector and vector v1.
This method returns the (n-space) angle, in radians, between this vector and the
vector v1 parameter. The return value is constrained to the range [0, π].
The first method linearly interpolates between vectors v1 and v2 and places the
result into this vector (this = alpha * v1 + (1 – alpha) * v2). The second method
linearly interpolates between this vector and vector v1 and places the result into
this vector (this = alpha * this + (1 – alpha) * v1).
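The interpolation formula above is a straightforward weighted sum; the following plain-array sketch (illustrative names, not the vecmath API) mirrors the first form.

```java
// Sketch of linear interpolation as described:
// result = alpha*v1 + (1 - alpha)*v2, element by element.
public class LerpDemo {
    static double[] interpolate(double[] v1, double[] v2, double alpha) {
        double[] out = new double[v1.length];
        for (int i = 0; i < v1.length; i++) {
            out[i] = alpha * v1[i] + (1.0 - alpha) * v2[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // With alpha = 0.5, the result is the midpoint of the two vectors.
        double[] mid = interpolate(new double[]{0, 0}, new double[]{10, 20}, 0.5);
        System.out.println(mid[0] + ", " + mid[1]); // 5.0, 10.0
    }
}
```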
Variables
The component values of a Matrix3f are directly accessible through the public
variables m00, m01, m02, m10, m11, m12, m20, m21, and m22. To access the element
in row 2 and column 0 of the matrix named rotate, a programmer would write
rotate.m20. Other matrix values are accessed similarly.
Constructors
These constructors each return a new Matrix3f object. The first constructor gen-
erates a 3 × 3 matrix from the nine values provided. The second constructor gen-
erates a 3 × 3 matrix from the first nine values in the array v. The third and fourth
constructors generate a new matrix with the same values as the passed matrix m1.
The final constructor generates a 3 × 3 matrix with all nine values set to 0.0.
Methods
These two set methods set the value of the matrix this to the matrix conversion
of the quaternion argument q1.
These two set methods set the value of the matrix this to the matrix conversion
of the axis and angle argument a1.
The first method sets the value of this matrix to a scale matrix with the passed
scale amount. The second method sets the values of this matrix to the
row-major array parameter (that is, the first three elements of the array are cop-
ied into the first row of this matrix, and so forth).
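The row-major layout described above maps array element row*3 + column to the matrix element at (row, column). A plain-array sketch (illustrative, not the vecmath implementation):

```java
// Sketch of filling a 3x3 matrix from a row-major array: the first three
// elements become row 0, the next three row 1, and so forth.
public class RowMajorSet {
    static double[][] setFromRowMajor(double[] v) {
        double[][] m = new double[3][3];
        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 3; col++) {
                m[row][col] = v[row * 3 + col];
            }
        }
        return m;
    }

    public static void main(String[] args) {
        double[][] m = setFromRowMajor(new double[]{1, 2, 3, 4, 5, 6, 7, 8, 9});
        System.out.println(m[1][0]); // 4.0: the second row starts at element 3
    }
}
```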
The setElement and getElement methods provide a means for accessing a sin-
gle element within a 3 × 3 matrix using indices. This is not a preferred method of
access, but Java 3D provides these methods for functional completeness. The
setElement method takes a row index row (where a value of 0 represents the
first row and a value of 2 represents the third row), a column index column
(where a value of 0 represents the first column and a value of 2 represents the
third column), and a value. It sets the corresponding element in matrix this to
the specified value. The getElement method also takes a row index row and a
column index column. It returns the element at the corresponding locations as a
floating-point value.
The first add method adds the matrix m1 to the matrix m2 and places the result
into the matrix this. The second add method adds the matrix this to the matrix
m1 and places the result into the matrix this. The first sub method performs an
element-by-element subtraction of matrix m2 from matrix m1 and places the result
into the matrix this. The second sub method performs an element-by-element
subtraction of the matrix m1 from the matrix this and places the result into the
matrix this.
The first method multiplies this matrix by the tuple t and places the result back
into the tuple (t = this*t). The second method multiplies this matrix by the tuple
t and places the result into the tuple result (result = this*t).
The first method transposes this matrix in place. The second method sets the
value of this matrix to the transpose of the matrix m1.
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
The determinant method computes the determinant of the matrix this and
returns the computed value.
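For a 3 × 3 matrix the determinant can be computed directly by cofactor expansion along the first row; the sketch below uses plain arrays and is illustrative, not the vecmath implementation.

```java
// 3x3 determinant via cofactor expansion along row 0.
public class Det3 {
    static double determinant(double[][] m) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    }

    public static void main(String[] args) {
        double[][] identity = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        System.out.println(determinant(identity)); // 1.0
    }
}
```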
The three rot methods construct rotation matrices that rotate in a clockwise
direction around the axis specified as the last letter of the method name. The con-
structed matrix replaces the value of the matrix this. The rotation angle is
expressed in radians.
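A rotation about the x-axis has the familiar cosine/sine pattern in the lower-right 2 × 2 block. The sketch below shows the usual right-handed form; the spec's sign convention for "clockwise" may flip the two sine terms, but either choice yields a proper rotation with determinant 1. Plain arrays, illustrative only.

```java
// Sketch of a 3x3 rotation matrix about the x-axis, angle in radians.
public class RotX {
    static double[][] rotX(double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[][] {
            {1, 0,  0},
            {0, c, -s},
            {0, s,  c}
        };
    }

    public static void main(String[] args) {
        double[][] r = rotX(Math.PI / 2);
        // After a 90-degree rotation, cos is ~0 and sin is ~1.
        System.out.println(r[2][1]);
    }
}
```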
The first mul method multiplies matrix m1 with matrix m2 and places the result
into the matrix this. The second mul method multiplies the matrix this with the
matrix m1 and places the result into matrix this.
The first mulNormalize method multiplies this matrix by matrix m1, performs an
SVD normalization of the result, and places the result back into this matrix (this
= SVDnorm(this ⋅ m1)). The second mulNormalize method multiplies matrix m1
by matrix m2, performs an SVD normalization of the result, and places the result
into this matrix (this = SVDnorm(m1 ⋅ m2)).
The equals method returns true if all of the data members of Matrix3f m1 are
equal to the corresponding data members in this Matrix3f.
This method returns true if the L∞ distance between this Matrix3f and Matrix3f
m1 is less than or equal to the epsilon parameter. Otherwise, this method
returns false. The L∞ distance is equal to
MAX[i = 0,1,2; j = 0,1,2; abs(this.m(i,j) – m1.m(i,j))]
The first method negates the value of this matrix in place (this = –this). The sec-
ond method sets the value of this matrix equal to the negation of the matrix m1
(this = –m1).
This method sets the scale component of the current matrix by factoring out the
current scale (by doing an SVD) and multiplying by the new scale.
This method adds a scalar to each component of the matrix m1 and places the
result into this. Matrix m1 is not modified.
This method multiplies each component of the matrix m1 by a scalar and places
the result into this. Matrix m1 is not modified.
The first method multiplies this matrix by the tuple t and places the result back
into the tuple (t = this*t). The second method multiplies this matrix by the
tuple t and places the result into the tuple result (result = this*t).
The hashCode method returns a hash number based on the data values in this
object. Two different Matrix3f objects with identical data values (that is,
equals(Matrix3f) returns true) will return the same hash number. Two
Matrix3f objects with different data members may return the same hash value,
although this is not likely.
The toString method returns a string that contains the values of this Matrix3f.
Variables
The component values of a Matrix3d are directly accessible through the public
variables m00, m01, m02, m10, m11, m12, m20, m21, and m22. To access the element
in row 2 and column 0 of the matrix named rotate, a programmer would write
rotate.m20. Other matrix values are accessed similarly.
Constructors
These constructors each return a new Matrix3d object. The first constructor gen-
erates a 3 × 3 matrix from the nine values provided. The second constructor gen-
erates a 3 × 3 matrix from the first nine values in the array v. The third
constructor generates a 3 × 3 matrix with all nine values set to 0.0. The fourth
and fifth constructors generate a 3 × 3 matrix with the same values as the matrix
m1 parameter.
Methods
This method sets the value of the matrix this to the float value of the rotational
components of the passed matrix m1.
These methods set the value of the matrix this to a scale matrix with the passed
scale amount.
These two set methods set the value of the matrix this to the matrix conversion
of the axis and angle argument a1.
These two set methods set the value of the matrix this to the matrix conversion
of the quaternion argument q1.
The setElement and getElement methods provide a means for accessing a sin-
gle element within a 3 × 3 matrix using indices. This is not a preferred method of
access, but Java 3D provides these methods for functional completeness. The
setElement method takes a row index row (where a value of 0 represents the
first row and a value of 2 represents the third row), a column index column
(where a value of 0 represents the first column and a value of 2 represents the
third column), and a value. It sets the corresponding element in matrix this to
the specified value. The getElement method also takes a row index row and a
column index column and returns the element at the corresponding locations as a
floating-point value.
The first add method adds the matrix m1 to the matrix m2 and places the result
into the matrix this. The second add method adds the matrix this to the matrix
m1 and places the result into the matrix this. The first sub method performs an
element-by-element subtraction of matrix m2 from matrix m1 and places the result
into the matrix this. The second sub method performs an element-by-element
subtraction of the matrix m1 from the matrix this and places the result into the
matrix this.
This method adds a scalar to each component of the matrix m1 and places the
result into this. Matrix m1 is not modified.
The first method multiplies this matrix by the tuple t and places the result back
into the tuple (t = this*t). The second method multiplies this matrix by the tuple
t and places the result into the tuple result (result = this*t).
The first method transposes this matrix in place. The second method sets the
value of this matrix to the transpose of the matrix m1.
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
The determinant method computes the determinant of the matrix this and
returns the computed value.
The three rot methods construct rotation matrices that rotate in a clockwise
direction around the axis specified by the final letter of the method name. The
constructed matrix replaces the value of the matrix this. The rotation angle is
expressed in radians.
The first mul method multiplies matrix m1 with matrix m2 and places the result
into the matrix this. The second mul method multiplies matrix this with matrix
m1 and places the result into the matrix this.
The first mulNormalize method multiplies this matrix by matrix m1, performs an
SVD normalization of the result, and places the result back into this matrix (this
= SVDnorm(this ⋅ m1)). The second mulNormalize method multiplies matrix m1
by matrix m2, performs an SVD normalization of the result, and places the result
into this matrix (this = SVDnorm(m1 ⋅ m2)).
The equals method returns true if all of the data members of Matrix3d m1 are
equal to the corresponding data members in this Matrix3d.
This method returns true if the L∞ distance between this Matrix3d and Matrix3d
m1 is less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to
MAX[i = 0,1,2; j = 0,1,2; abs(this.m(i,j) – m1.m(i,j))]
The first method negates the value of this matrix in place (this = –this). The sec-
ond method sets the value of this matrix equal to the negation of the matrix m1
(this = –m1).
This method sets the scale component of the current matrix by factoring out the
current scale (by doing an SVD) and multiplying by the new scale.
This method multiplies each component of the matrix m1 by a scalar and places
the result into this. Matrix m1 is not modified.
The first method multiplies this matrix by the tuple t and places the result back
into the tuple (t = this*t). The second method multiplies this matrix by the
tuple t and places the result into the tuple result (result = this*t).
The hashCode method returns a hash number based on the data values in this
object. Two different Matrix3d objects with identical data values (that is,
equals(Matrix3d) returns true) will return the same hash number. Two
Matrix3d objects with different data members may return the same hash value,
although this is not likely.
The toString method returns a string that contains the values of this Matrix3d.
Variables
The component values of a Matrix4f are directly accessible through the public
variables m00, m01, m02, m03, m10, m11, m12, m13, m20, m21, m22, m23, m30, m31,
m32, and m33. To access the element in row 2 and column 0 of matrix rotate, a
programmer would write rotate.m20. A programmer would access the other
values similarly.
Constructors
These constructors each return a new Matrix4f object. The first constructor gen-
erates a 4 × 4 matrix from the 16 values provided. The second constructor gener-
ates a 4 × 4 matrix from the first 16 values in the array v. The third constructor
generates a 4 × 4 matrix from the quaternion, translation, and scale values. The
scale is applied only to the rotational components of the matrix (upper 3 × 3) and
not to the translational components. The fourth and fifth constructors generate a
4 × 4 matrix with the same values as the passed matrix m1. The sixth constructor
generates a 4 × 4 matrix from the rotation matrix, translation, and scale values.
The scale is applied only to the rotational components of the matrix (upper 3 × 3)
and not to the translational components of the matrix. The final constructor gen-
erates a 4 × 4 matrix with all 16 values set to 0.0.
Methods
The first two set methods set the value of this matrix to the matrix conversion of
the quaternion argument q1. The next two set methods set the value of this
matrix from the rotation expressed by the quaternion q1, the translation t1, and
the scale s. The next two set methods set the value of this matrix to a copy of
the passed matrix m1. The last two set methods set the value of this matrix to the
matrix conversion of the axis and angle argument a1.
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the m1 argument. The other elements of this matrix are initial-
ized as if this were an identity matrix (that is, an affine matrix with no transla-
tional component).
The first method sets the value of this matrix to a scale matrix with the passed
scale amount. The second method sets the value of this matrix to the row-major
array parameter (that is, the first four elements of the array are copied into the
first row of this matrix, and so forth).
This method sets the value of this matrix to a translation matrix with the passed
translation value.
These methods set the value of this matrix to a scale and translation matrix. In
the first method, the scale is not applied to the translation, and all of the matrix
values are modified. In the second method, the translation is scaled by the scale
factor, and all of the matrix values are modified.
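The first variant places the uniform scale in the upper 3 × 3 and the unscaled translation in the last column. A plain-array sketch (illustrative names, not the vecmath API):

```java
// Sketch of a 4x4 scale-and-translation matrix where the scale is NOT
// applied to the translation (the first variant described above).
public class ScaleTranslate {
    static double[][] scaleTranslate(double s, double tx, double ty, double tz) {
        return new double[][] {
            {s, 0, 0, tx},
            {0, s, 0, ty},
            {0, 0, s, tz},
            {0, 0, 0, 1}
        };
    }

    public static void main(String[] args) {
        double[][] m = scaleTranslate(2.0, 5.0, 0.0, 0.0);
        // Transforming the point (1, 0, 0, 1): x' = 2*1 + 5 = 7
        System.out.println(m[0][0] * 1 + m[0][3] * 1); // 7.0
    }
}
```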
These two methods set the value of this matrix from the rotation expressed by
the rotation matrix m1, the translation t1, and the scale scale. The translation is
not modified by the scale.
The first two methods perform an SVD normalization of this matrix in order to
acquire the normalized rotational component. The values are placed into the
matrix parameter m1. The third method performs an SVD normalization of this
matrix to calculate the rotation as a 3 × 3 matrix, the translation, and the scale.
None of the matrix values in this matrix are modified. The fourth method per-
forms an SVD normalization of this matrix to acquire the normalized rotational
component. The values are placed into the quaternion q1. The final method
retrieves the translational components of this matrix and copies them into the
vector trans.
The setElement and getElement methods provide a means for accessing a sin-
gle element within a 4 × 4 matrix using indices. This is not a preferred method of
access, but Java 3D provides these methods for functional completeness. The
setElement method takes a row index row (where a value of 0 represents the
first row and a value of 3 represents the fourth row), a column index column
(where a value of 0 represents the first column and a value of 3 represents the
fourth column), and a value. It sets the corresponding element in matrix this to
the specified value. The getElement method also takes a row index row and a
column index column and returns the element at the corresponding locations as a
floating-point value.
This method retrieves the upper 3 × 3 values of this matrix and places them into
the matrix m1.
The first method sets the scale component of the current matrix by factoring out
the current scale (by doing an SVD) and multiplying by the new scale. The sec-
ond method performs an SVD normalization of this matrix to calculate and
return the uniform scale factor.
This method adds a scalar to each component of the matrix m1 and places the
result into this. Matrix m1 is not modified.
This method multiplies each component of the matrix m1 by a scalar and places
the result into this. Matrix m1 is not modified.
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the passed argument. The other elements of this matrix are
unchanged. In the first two methods, a singular value decomposition is per-
formed on this object’s upper 3 × 3 matrix to factor out the scale, then this
object’s upper 3 × 3 matrix components are replaced by the passed rotation com-
ponents, and finally the scale is reapplied to the rotational components. In the
next two methods, a singular value decomposition is performed on this object’s
upper 3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix
components are replaced by the matrix equivalent of the quaternion, and finally
the scale is reapplied to the rotational components. In the last method, a singular
value decomposition is performed on this object’s upper 3 × 3 matrix to factor
out the scale, then this object’s upper 3 × 3 matrix components are replaced by
the matrix equivalent of the axis-angle, and finally the scale is reapplied to the
rotational components.
This method replaces the upper 3 × 3 matrix values of this matrix with the values
in the matrix m1.
This method modifies the translational components of this matrix to the values of
the vector trans. The other values of this matrix are not modified.
The first add method adds the matrix m1 to the matrix m2 and places the result
into the matrix this. The second add method adds the matrix this to the matrix
m1 and places the result into the matrix this. The first sub method performs an
element-by-element subtraction of matrix m2 from matrix m1 and places the result
into the matrix this. The second sub method performs an element-by-element
subtraction of the matrix m1 from the matrix this and places the result into the
matrix this.
The first transpose method transposes the matrix m1 and places the result into
the matrix this. The second transpose method transposes the matrix this and
places the result back into the matrix this.
The first transform method postmultiplies this matrix by the Point3f point and
places the result back into point. The multiplication treats the three-element
point as if its fourth element were 1. The second transform method postmulti-
plies this matrix by the Point3f point and places the result into pointOut.
The first transform method postmultiplies this matrix by the Vector3f normal
and places the result back into normal. The multiplication treats the three-ele-
ment vector as if its fourth element were 0. The second transform method post-
multiplies this matrix by the Vector3f normal and places the result into
normalOut.
The first transform method postmultiplies this matrix by the tuple vec and
places the result back into vec. The second transform method postmultiplies
this matrix by the tuple vec and places the result into vecOut.
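The point-versus-vector distinction above reduces to the implicit fourth component: a point is treated as (x, y, z, 1), so the translation column applies, while a vector is (x, y, z, 0), so it is rotated and scaled but not translated. A plain-array sketch (illustrative, not the vecmath classes):

```java
// Sketch of transforming a 3-element tuple by a 4x4 matrix, with the
// fourth component w supplied explicitly: w = 1 for points, w = 0 for vectors.
public class TransformDemo {
    static double[] transform(double[][] m, double[] xyz, double w) {
        double[] out = new double[3];
        for (int row = 0; row < 3; row++) {
            out[row] = m[row][0] * xyz[0] + m[row][1] * xyz[1]
                     + m[row][2] * xyz[2] + m[row][3] * w;
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] translate = {
            {1, 0, 0, 10},
            {0, 1, 0,  0},
            {0, 0, 1,  0},
            {0, 0, 0,  1}
        };
        double[] p = transform(translate, new double[]{1, 2, 3}, 1.0); // point
        double[] n = transform(translate, new double[]{1, 2, 3}, 0.0); // vector
        System.out.println(p[0] + " vs " + n[0]); // 11.0 vs 1.0
    }
}
```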
The first method negates the value of this matrix in place (this = –this). The sec-
ond method sets the value of this matrix equal to the negation of the matrix m1
(this = –m1).
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
The determinant method computes the determinant of the matrix this and
returns the computed value.
The three rot methods construct rotation matrices that rotate in a clockwise
direction around the axis specified as the last letter of the method name. The con-
structed matrix replaces the value of the matrix this. The rotation angle is
expressed in radians.
The first mul method multiplies matrix m1 with matrix m2 and places the result
into the matrix this. The second mul method multiplies the matrix this with
matrix m1 and places the result in matrix this.
The equals method returns true if all of the data members of Matrix4f m1 are
equal to the corresponding data members in this Matrix4f.
This method returns true if the L∞ distance between this Matrix4f and Matrix4f
m1 is less than or equal to the epsilon parameter. Otherwise, this method returns
false. The L∞ distance is equal to
MAX[i = 0,1,2,3; j = 0,1,2,3; abs(this.m(i,j) – m1.m(i,j))]
The hashCode method returns a hash number based on the data values in this
object. Two different Matrix4f objects with identical data values (that is,
equals(Matrix4f) returns true) will return the same hash number. Two
Matrix4f objects with different data members may return the same hash value,
although this is not likely.
The toString method returns a string that contains the values of this Matrix4f.
Variables
The component values of a Matrix4d are directly accessible through the public
variables m00, m01, m02, m03, m10, m11, m12, m13, m20, m21, m22, m23, m30, m31,
m32, and m33. To access the element in row 2 and column 0 of matrix rotate, a
programmer would write rotate.m20. A programmer would access the other
values similarly.
Constructors
These constructors each return a new Matrix4d object. The first constructor gen-
erates a 4 × 4 matrix from the 16 values provided. The second constructor gener-
ates a 4 × 4 matrix from the first 16 values in the array v. The third through sixth
constructors generate a 4 × 4 matrix from the quaternion, translation, and scale
values. The scale is applied only to the rotational components of the matrix
(upper 3 × 3) and not to the translational components. The seventh and eighth
constructors generate a 4 × 4 matrix with the same values as the passed matrix.
The final constructor generates a 4 × 4 matrix with all 16 values set to 0.0.
Methods
The first two methods perform an SVD normalization of this matrix in order to
acquire the normalized rotational component. The values are placed into the
passed parameter. The next two methods perform an SVD normalization of this
matrix to calculate the rotation as a 3 × 3 matrix, the translation, and the scale.
None of the matrix values are modified. The next two methods perform an SVD
normalization of this matrix to acquire the normalized rotational component. The
last two methods retrieve the translational components of this matrix.
The setElement and getElement methods provide a means for accessing a sin-
gle element within a 4 × 4 matrix using indices. This is not a preferred method of
access, but Java 3D provides these methods for functional completeness. The
setElement method takes a row index row (where a value of 0 represents the
first row and a value of 3 represents the fourth row), a column index column
(where a value of 0 represents the first column and a value of 3 represents the
fourth column), and a value. It sets the corresponding element in matrix this to
the specified value. The getElement method also takes a row index row and a
column index column and returns the element at the corresponding locations as a
floating-point value.
The three setRow methods set the specified row of the matrix to the passed val-
ues. The first setRow method uses the four individual floating-point values pro-
vided to update the matrix. The second setRow method uses the values in the
vector v to update the matrix. The third setRow method uses the first four values
in the array v to update the matrix. In all three cases the matrix affected is the
matrix this. The two getRow methods copy the matrix values in the specified
row into the array or vector parameter, respectively.
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the passed argument. The other elements of this matrix are
unchanged. A singular value decomposition is performed on this object’s upper
3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix compo-
nents are replaced by the passed rotation components, and finally the scale is
reapplied to the rotational components.
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the passed argument. The other elements of this matrix are
unchanged. A singular value decomposition is performed on this object’s upper
3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix compo-
nents are replaced by the matrix equivalent of the quaternion, and finally the
scale is reapplied to the rotational components.
This method sets the rotational component (upper 3 × 3) of this matrix to the
equivalent values in the passed argument. The other elements of this matrix are
unchanged. A singular value decomposition is performed on this object’s upper
3 × 3 matrix to factor out the scale, then this object’s upper 3 × 3 matrix compo-
nents are replaced by the matrix equivalent of the axis-angle, and finally the scale
is reapplied to the rotational components.
The two get methods retrieve the upper 3 × 3 values of this matrix and place
them into the matrix m1. The two set methods replace the upper 3 × 3 matrix
values of this matrix with the values in the matrix m1.
This method modifies the translational components of this matrix to the values of
the Vector3d argument. The other values of this matrix are not modified.
The first method sets the scale component of the current matrix by factoring out
the current scale (by doing an SVD) and multiplying by the new scale. The sec-
ond method performs an SVD normalization of this matrix to calculate and
return the uniform scale factor.
This method adds a scalar to each component of the matrix m1 and places the
result into this. Matrix m1 is not modified.
This method multiplies each component of the matrix m1 by a scalar and places
the result into this. Matrix m1 is not modified.
The first add method adds the matrix m1 to the matrix m2 and places the result
into the matrix this. The second add method adds the matrix this to the matrix
m1 and places the result into the matrix this. The first sub method performs an
element-by-element subtraction of matrix m2 from matrix m1 and places the result
into the matrix this. The second sub method performs an element-by-element
subtraction of the matrix m1 from the matrix this and places the result into the
matrix this.
This method sets the value of this matrix to the row-major array parameter (that
is, the first four elements of the array will be copied into the first row of this
matrix, and so forth).
These methods set the rotational component (upper 3 × 3) of this matrix to the
matrix values in the matrix argument. The other elements of this matrix are ini-
tialized as if this were an identity matrix (that is, an affine matrix with no trans-
lational component).
These methods set the value of this matrix to the value of the passed matrix m1.
These methods set the value of this matrix to the matrix conversion of the quater-
nion argument.
These methods set the value of this matrix to the matrix conversion of the axis
and angle argument.
This method sets the value of this matrix to a translation matrix by the passed
translation value.
These methods set the value of this matrix to the rotation expressed by the
quaternion q1, the translation t1, and the scale s.
This method sets the value of this matrix to a scale matrix with the passed scale
amount.
This method sets the value of this matrix to a scale and translation matrix. The
scale is not applied to the translation, and all of the matrix values are modified.
This method sets the value of this matrix to a scale and translation matrix. The
translation is scaled by the scale factor, and all of the matrix values are modified.
These methods set the value of this matrix from the rotation expressed by the
rotation matrix m1, the translation t1, and the scale s.
The first method sets the value of this matrix to the negation of the m1 parameter.
The second method negates the value of this matrix (this = –this).
The first transpose method transposes the matrix m and places the result into the
matrix this. The second transpose method transposes the matrix this and
places the result back into the matrix this.
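Transposing in place amounts to swapping each above-diagonal element with its mirror below the diagonal, visiting each pair exactly once. A plain-array sketch (illustrative, not the vecmath implementation):

```java
// Sketch of an in-place 4x4 transpose: swap m[i][j] with m[j][i] for j > i
// so each off-diagonal pair is exchanged exactly once.
public class Transpose4 {
    static void transposeInPlace(double[][] m) {
        for (int i = 0; i < 4; i++) {
            for (int j = i + 1; j < 4; j++) {
                double tmp = m[i][j];
                m[i][j] = m[j][i];
                m[j][i] = tmp;
            }
        }
    }

    public static void main(String[] args) {
        double[][] m = {
            { 1,  2,  3,  4},
            { 5,  6,  7,  8},
            { 9, 10, 11, 12},
            {13, 14, 15, 16}
        };
        transposeInPlace(m);
        System.out.println(m[0][3]); // 13.0: was m[3][0]
    }
}
```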
The first two transform methods postmultiply this matrix by the tuple vec and
place the result back into vec. The last two transform methods postmultiply this
matrix by the tuple vec and place the result into vecOut.
The first two transform methods postmultiply this matrix by the point argument
point and place the result back into point. The multiplication treats the
three-element point as if its fourth element were 1. The last two transform
methods postmultiply this matrix by the point argument point and place the
result into pointOut.
The first two transform methods postmultiply this matrix by the vector argu-
ment normal and place the result back into normal. The multiplication treats the
three-element vector as if its fourth element were 0. The last two transform
methods postmultiply this matrix by the vector argument normal and place the
result into normalOut.
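The point-versus-vector distinction (implicit fourth element of 1 versus 0) can be sketched with plain arrays and hypothetical method names, rather than the actual tuple classes:

```java
public class Transform {
    // Postmultiply a 4x4 matrix by the column tuple (x, y, z, w);
    // returns the transformed (x, y, z).
    static double[] mul(double[][] m, double x, double y, double z, double w) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++)
            out[i] = m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3] * w;
        return out;
    }

    // Points use w = 1: the translation column applies.
    static double[] transformPoint(double[][] m, double[] p) {
        return mul(m, p[0], p[1], p[2], 1.0);
    }

    // Vectors (e.g., normals) use w = 0: the translation column is ignored.
    static double[] transformVector(double[][] m, double[] v) {
        return mul(m, v[0], v[1], v[2], 0.0);
    }
}
```

For a pure translation matrix, a point moves by the translation while a vector is unchanged, which is exactly the behavior a surface normal requires.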
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
The determinant method computes the determinant of the matrix this and
returns the computed value.
The rot methods construct rotation matrices that rotate in a clockwise direction
around the axis specified as the last letter of the method name. The constructed
matrix replaces the value of the matrix this. The rotation angle is expressed in
radians.
The first mul method multiplies matrix m1 with matrix m2 and places the result
into the matrix this. The second mul method multiplies matrix this with matrix
m1 and places the result into the matrix this.
The equals method returns true if all of the data members of Matrix4d m1 are
equal to the corresponding data members in this Matrix4d.
This method returns true if the L∞ distance between this Matrix4d and Matrix4d
m1 is less than or equal to the epsilon parameter; otherwise, this method returns
false. The L∞ distance is equal to
MAX[i = 0..3, j = 0..3: abs(this.m(i,j) − m1.m(i,j))]
The hashCode method returns a hash number based on the data values in this
object. Two different Matrix4d objects with identical data values (that is,
equals(Matrix4d) returns true) will return the same hash number. Two
Matrix4d objects with different data members may return the same hash value,
although this is not likely.
The toString method returns a string that contains the values of this Matrix4d.
Constructors
These constructors each return a new GMatrix. The first constructor generates an
nRow by nCol identity matrix. The second constructor generates an nRow by nCol
matrix initialized to the values in the array matrix. The last constructor gener-
ates a new GMatrix and copies the initial values from the parameter matrix
argument.
Methods
The first mul method multiplies matrix m1 with matrix m2 and places the result
into this. The second mul method multiplies this matrix with matrix m1 and
places the result into this.
The first add method adds this matrix to matrix m1 and places the result back into
this. The second add method adds matrices m1 and m2 and places the result into
this. The first sub method subtracts matrix m1 from the matrix this and places
the result into this. The second sub method subtracts matrix m2 from matrix m1
and places the result into the matrix this.
The first method negates the value of this matrix in place (this = –this). The sec-
ond method sets the value of this matrix to the negation of the matrix m1 (this =
–m1).
The first method inverts this matrix in place. The second method sets the value of
this matrix to the inverse of the matrix m1.
This method subtracts this matrix from the identity matrix and puts the values
back into this (this = I – this).
This method copies a submatrix derived from this matrix into the target matrix.
The rowSource and colSource parameters define the upper left of the submatrix.
The numRow and numCol parameters define the number of rows and columns in
the submatrix. The submatrix is copied into the target matrix starting at (rowD-
est, colDest). The target parameter is the matrix into which the submatrix will
be copied.
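A sketch of these copy semantics with plain arrays (hypothetical class and method names):

```java
public class SubMatrixCopy {
    // Copies a numRow x numCol submatrix of src, whose upper-left corner is
    // at (rowSource, colSource), into dest starting at (rowDest, colDest).
    static void copySubMatrix(double[][] src, int rowSource, int colSource,
                              int numRow, int numCol,
                              int rowDest, int colDest, double[][] dest) {
        for (int i = 0; i < numRow; i++)
            for (int j = 0; j < numCol; j++)
                dest[rowDest + i][colDest + j] = src[rowSource + i][colSource + j];
    }
}
```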
This method changes the size of this matrix dynamically. If the size is increased,
no data values will be lost. If the size is decreased, only those data values whose
matrix positions were eliminated will be lost.
The first set method sets the values of this matrix to the values found in the
matrix array parameter. The values are copied in one row at a time, in
row-major fashion. The array should be at least equal in length to the number of
matrix rows times the number of matrix columns in this matrix. The second set
method sets the values of this matrix to the values found in matrix m1. The last
four set methods set the values of this matrix to the values found in matrix m1.
The first two methods place the values in the upper 3 × 3 of this matrix into the
matrix m1. The next two methods place the values in the upper 4 × 4 of this
matrix into the matrix m1. The final method places the values in this matrix into
the matrix m1. Matrix m1 should be at least as large as this matrix.
The getNumRow method returns the number of rows in this matrix. The getNum-
Col method returns the number of columns in this matrix.
These methods set and retrieve the value at the specified row and column of this
matrix.
The setRow methods copy the values from the array into the specified row of this
matrix. The getRow methods place the values of the specified row into the array
or vertex. The setColumn methods copy the values from the array into the spec-
ified column of this matrix or vector. The getColumn methods place the values of
the specified column into the array or vector.
This method sets this matrix to a uniform scale matrix, and all of the values are
reset.
The first transpose method transposes this matrix in place. The second trans-
pose method places the matrix values of the transpose of matrix m1 into this
matrix.
This method returns a string that contains the values of this GMatrix.
This method returns a hash number based on the data values in this object. Two
different GMatrix objects with identical data values (that is, equals(GMatrix)
returns true) will return the same hash number. Two objects with different data
members may return the same hash value, although this is not likely.
This method returns true if all of the data members of GMatrix m1 are equal to
the corresponding data members in this GMatrix.
This method returns true if the L∞ distance between this GMatrix and GMatrix
m1 is less than or equal to the epsilon parameter; otherwise, this method returns
false. The L∞ distance is equal to
MAX[i = 0..numRow−1, j = 0..numCol−1: abs(this.m(i,j) − m1.m(i,j))]
The SVD method finds the singular value decomposition (SVD) of this matrix
such that this = U*W*Vᵀ, and returns the rank of this matrix. The values of
U, W, and V are all overwritten. Note that the matrix V is output as V and not Vᵀ.
If this matrix is m × n, then U is m × m, W is a diagonal matrix that is m × n, and
V is n × n. The inverse of this matrix is this⁻¹ = V*W⁻¹*Uᵀ, where W⁻¹ is a
diagonal matrix computed by taking the reciprocal of each of the diagonal ele-
ments of matrix W.
B.1 Compression
First, the geometry to be compressed is converted into a generalized mesh form,
which allows a triangle to be, on average, specified by 0.80 vertices.
Next the data for each vertex component of the geometry is converted to the
most efficient representation format for its type and then quantized to as few bits
as possible.
These quantized bits are differenced between successive vertices, and the results
are modified Huffman encoded into self-describing variable-bit-length data ele-
ments.
Finally, these variable-length elements are strung together using Java 3D’s eight
geometry commands into a final compressed geometry block.
B.2 Decompression
Upon receipt, compressed geometry blocks are decompressed into the local
host’s preferred geometry format by reversing the above process.
A stack of the last three vertices used to form a triangle is kept. The three verti-
ces are labeled oldest, middle, and newest. An incoming vertex of type replace_
oldest causes the oldest vertex to be replaced by the middle, the middle to be
replaced by the newest, and the incoming vertex to become the newest. This cor-
responds to a PHIGS PLUS triangle strip (sometimes called a “zig-zag” strip).
The replacement type replace_middle leaves the oldest vertex unchanged,
replaces the middle vertex by the newest, and the incoming vertex becomes the
newest. This corresponds to a triangle star or fan.
The replacement type restart marks the oldest and middle vertices as invalid,
and the incoming vertex becomes the newest. Generalized triangle strips must
always start with this code. A triangle will be output only when a replacement
operation results in three valid vertices.
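The three replacement codes can be sketched as a small decoder (hypothetical class; winding-order handling is omitted here for brevity):

```java
import java.util.ArrayList;
import java.util.List;

public class StripDecoder {
    static final int RESTART = 0, REPLACE_OLDEST = 1, REPLACE_MIDDLE = 2;

    int oldest, middle, newest;
    int valid;                              // number of valid stack entries
    final List<int[]> triangles = new ArrayList<>();

    void vertex(int code, int v) {
        switch (code) {
            case RESTART:                   // oldest and middle invalidated
                valid = 1;
                break;
            case REPLACE_OLDEST:            // strip ("zig-zag") behavior
                oldest = middle; middle = newest; valid++;
                break;
            case REPLACE_MIDDLE:            // star/fan behavior; oldest kept
                middle = newest; valid++;
                break;
        }
        newest = v;
        // A triangle is output only once three valid vertices exist.
        if (valid >= 3) {
            valid = 3;
            triangles.add(new int[] { oldest, middle, newest });
        }
    }
}
```

Feeding the sequence restart 1, RO 2, RO 3, RO 4 produces the triangles (1, 2, 3) and (2, 3, 4), as in a conventional strip.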
Restart corresponds to a “move” operation in polylines, and allows multiple
unconnected variable-length triangle strips to be described by a single data struc-
ture passed in by the user, reducing the overhead. The generalized triangle strip’s
ability to effectively change from “strip” to “star” mode in the middle of a strip
allows more complex geometry to be represented compactly, and requires less
input data bandwidth. The restart capability allows several pieces of discon-
nected geometry to be passed as one data block. Figure B-1 shows a single gen-
eralized triangle strip and the associated replacement codes.
Triangles are normalized such that the front face is always defined by a clock-
wise vertex order after transformation. To support this, there are two flavors of
restart: restart_clockwise and restart_counterclockwise. The vertex order
is reversed after every replace_oldest, but remains the same after every
replace_middle.
Figure B-1: A single generalized triangle strip. Vertices 1–33 carry the replacement codes Restart, RO (replace oldest), and RM (replace middle); the sequence traverses a triangle strip, a triangle star, an independent triangle, an independent quad, and a mixed strip.
However, by confining itself to linear strips, the generalized triangle strip format
leaves a potential factor of two (in space) on the table. Consider the geometry in
Figure B-2.
While it can be represented by one triangle strip, many of the interior vertices
appear twice in the strip. This is inherent in any approach wishing to avoid refer-
ences to old data. Some systems have tried using a simple regular mesh buffer to
support reuse of old vertices, but there is a problem with this approach in prac-
tice: In general, geometry does not come in a perfectly regular rectangular mesh
structure.
Figure B-2: A triangle mesh (vertices 1–30). Represented as a single generalized triangle strip, many of its interior vertices must appear twice.
We still assume that the position and scale of the local modeling spaces are spec-
ified by full 32-bit or 64-bit floating-point coordinates. If sufficient numerical
care is taken, multiple such modeling spaces can be stitched together without
cracks, forming seamless geometry coordinate systems with much greater than
16-bit positional precision.
Most geometry is local, so within the 16-bit (or less) modeling space (of each
object), the delta difference between one vertex and the next in the generalized
mesh buffer stream is very likely to be less than 16 bits in significance. Indeed
one can histogram the bit length of neighboring position deltas in a batch of
geometry and, based on this histogram, assign a variable-length code to com-
pactly represent the vertices. The typical coding used in many similar situations
is a customized Huffman code; geometry compression takes this approach as well.
The details of the coding of position deltas will be postponed until later, where
they can be discussed in the context of color and normal delta coding as well.
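The histogramming step can be sketched as follows (hypothetical helper names; "bit length" here means the two's-complement length of a delta, including one sign bit):

```java
public class DeltaHistogram {
    // Number of bits needed to represent a signed two's-complement delta,
    // including one sign bit (e.g., 3 -> 3 bits "011", -4 -> 3 bits "100").
    static int bitLength(int delta) {
        int mag = delta < 0 ? ~delta : delta;
        return 32 - Integer.numberOfLeadingZeros(mag) + 1;
    }

    // Histogram the bit lengths of successive deltas of one position
    // component; a variable-length code is then assigned from the histogram.
    static int[] histogram(int[] coords) {
        int[] hist = new int[33];
        for (int i = 1; i < coords.length; i++)
            hist[bitLength(coords[i] - coords[i - 1])]++;
        return hist;
    }
}
```

Short, frequent delta lengths would then be assigned short Huffman tags, and rare long lengths longer tags.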
While no perfectly even angular density distribution of normals on the unit
sphere exists for large numbers of points, many near-optimal distributions exist.
Thus, in theory, one of these with the same sort of 48-way symmetry described
above could be used for the decompression look-up table. However, several
additional constraints mandate a different choice of encoding:
• We desire a scalable density distribution in which zeroing more and more
of the low-order address bits to the table still results in fairly even density
of normals on the unit sphere. Otherwise a different look-up table for every
encoding density would be required.
• We desire a delta-encodable distribution. Statistically, adjacent vertices in
geometry will have normals that are nearby on the surface of the unit
sphere. Nearby locations on the 2D space of the unit-sphere surface are
most succinctly encoded by a 2D offset. We desire a distribution where
such a metric exists.
• Finally, while the computational cost of the normal encoding process is not
too important, in general, distributions with lower encoding costs are pre-
ferred.
For all these reasons, we decided to use a regular grid in the angular space within
one sextant as our distribution. Thus, rather than a monolithic 11-bit index, all
normals within a sextant are much more conveniently represented as two 6-bit
orthogonal angular addresses, revising our grand total to 18 bits. Just as for posi-
tions and colors, if more quantization of normals is acceptable, then these 6-bit
indices can be reduced to fewer bits, and thus absolute normals can be repre-
sented using anywhere from 18 to as few as 6 bits. But as will be seen, we can
delta-encode this space, further reducing the number of bits required for high-
quality representation of normals.
These two equations show how values of θ̂ n and φ̂ n can be converted to spherical
coordinates θ and φ, which in turn can be converted to rectilinear normal coordi-
nate components via equation B.1.
To reverse the process, for example, to encode a given normal n into θ̂ n and φ̂ n ,
one cannot just invert equation B.2. Instead, the n must first be folded into the
canonical octant and sextant, resulting in n'. Then n' must be dotted with all
quantized normals in the sextant. For a fixed n, the values of θ̂ n and φ̂ n that
result in the largest (nearest unity) dot product define the proper encoding of n.
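The encoding search can be sketched as an exhaustive dot-product scan, assuming a hypothetical norms[v][u] grid holding the quantized unit normals of the canonical sextant (the folding of n into n′ is shown later, in the Normals section of the compression algorithm):

```java
public class NormalEncoder {
    // Given a normal nPrime already folded into the canonical octant and
    // sextant, returns the (theta-hat, phi-hat) grid indices whose quantized
    // normal has the largest (nearest unity) dot product with nPrime.
    static int[] encode(double[] nPrime, double[][][] norms) {
        int bestU = 0, bestV = 0;
        double best = -1.0;
        for (int v = 0; v < norms.length; v++)
            for (int u = 0; u < norms[v].length; u++) {
                double[] q = norms[v][u];
                double dot = q[0] * nPrime[0] + q[1] * nPrime[1] + q[2] * nPrime[2];
                if (dot > best) { best = dot; bestU = u; bestV = v; }
            }
        return new int[] { bestU, bestV };
    }
}
```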
Now the complete bit format of absolute normals can be given. The uppermost
three bits specify the octant, the next three bits the sextant, and finally two n-bit
fields specify θ̂ n and φ̂ n . The three-bit sextant field takes on one of six values,
the binary codes for which are shown in Figure B-3.
Figure B-3: The three-bit binary sextant codes within an octant. Each code corresponds to one relative ordering of the components (as implied by the fold pseudocode in Section B.15.4): 000: x ≥ z ≥ y; 001: y ≥ z ≥ x; 010: x ≥ y ≥ z; 011: y ≥ x ≥ z; 100: z ≥ x ≥ y; 101: z ≥ y ≥ x.
This discussion has ignored some details. In particular, the three normals at the
corners of the canonical patch are multiply represented (6, 8, and 12 times). By
employing the two unused values of the sextant field, these normals can be
uniquely encoded as special normals. The normal subcommand describes the
special encoding used for two of these corner cases (14 total special normals).
This representation of normals is amenable to delta encoding, at least within a
sextant. (With some additional work, this can be extended to sextants that share a
common edge.) The delta code between two normals is simply the difference in
θ̂n and φ̂n: Δθ̂n and Δφ̂n.
In practice, the ΔX, ΔY, and ΔZ components of a position delta tend to require a
similar number of bits in their representation. Thus we made the decision to use a
single field-length tag to indicate the bit length of ΔX, ΔY, and ΔZ.
This also means that we cannot take advantage of another Huffman technique
that saves somewhat less than one more bit per component, but our bit savings by
not having to specify two additional tag fields (for ΔY and ΔZ) outweigh this. A
single tag field also means that a hardware decompression engine can decom-
press all three fields in parallel, if desired.
Similar arguments hold for deltas of RGBα values, and so here also a single
field-length tag indicates the bit-length of the ΔR, ΔG, ΔB, and Δα (if present)
fields.
Both absolute and delta normals are also parameterized by a single value (n),
which can be specified by a single tag.
We chose to limit the length of the Huffman tag field to the relatively small value
of six bits. This was done to facilitate high-speed, low-cost hardware implemen-
tations. (A 64-entry tag look-up table allows decoding of tags in one clock
cycle.) Three such tables exist: one each for positions, normals, and colors. The
tables contain the length of the tag field, the length of the data field(s), a data
normalization coefficient, and an absolute/relative bit.
One additional complication was required to enable reasonable hardware imple-
mentations. As will be seen in a later section, all instructions are broken up into
an eight-bit header and a variable-length body. Sufficient information is present
in the header to determine the length of the body. But to give the hardware time
to process the header information, the header of one instruction must be placed
in the stream before the body of the previous instruction. Thus the sequence …
B0 H1B1 H2B2 H3 … has to be encoded as follows:
… H1 B0 H2 B1 H3 B2 …
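The reordering can be sketched as follows (strings stand in for the actual bit fields; hypothetical names):

```java
import java.util.ArrayList;
import java.util.List;

public class HeaderForwarding {
    // Reorders a list of (header, body) instruction pairs so that each
    // header precedes the body of the previous instruction:
    //   H0 B0 H1 B1 H2 B2 ...  ->  H0 H1 B0 H2 B1 ...
    // The final pending body is flushed at the end of the stream.
    static List<String> interleave(List<String[]> instructions) {
        List<String> out = new ArrayList<>();
        String pendingBody = null;
        for (String[] inst : instructions) {
            out.add(inst[0]);                       // header of instruction i
            if (pendingBody != null) out.add(pendingBody); // body of i-1
            pendingBody = inst[1];
        }
        if (pendingBody != null) out.add(pendingBody);
        return out;
    }
}
```

This gives the hardware a full body-length of stream time to decode each header before its body arrives.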
vertex
A vertex command specifies the position of a vertex, contains a mesh buffer
push (mbp) bit, and may optionally specify a normal and/or a color (or texture
map coordinate). The presence of normal or color data within a vertex com-
mand is controlled by two state bits known as the bundling bits: bnv and bcv,
respectively.
normal, color
There are also two stand-alone commands for specifying normals and colors:
normal and color. These commands may be freely interspersed with vertex
commands, and semantically have (nearly) the same effect as normals or colors
bundled directly with a vertex.
Once a color or normal value is specified, either directly or bundled with a ver-
tex command, that color or normal will remain in effect as the current color or
normal until a new value is specified. In this fashion, for example, a constant
material color may be specified to apply to a forthcoming sequence of non-color-
bundled vertices.
setState
The setState command updates the value of the three state bits. Two of these
bits are the normal and color bundling bits; the other one will be described later.
meshBufferReference
setTable
The setTable command allows a range of entries in one of the three Huffman
decompression tables all to be set to the same new value.
passthrough
NOP
The variable length no-operation NOP command allows the compression bit
stream to be padded by a specified number of bits. This allows portions of the
compression data to be 32-bit aligned when desired.
B.12.1 NOP
The variable length no-operation (NOP) command has an 8-bit opcode, a 5-bit
count field, and a 0- to 31-bit field of zeros. The total length of the variable-
length no-operation command is between 13 and 44 bits.
The variable-length NOP command’s primary use is to align geometry decompres-
sion commands to word boundaries, when desired. This is useful if one wishes to
“patch” a decompression instruction in the middle of a stream without having to
bit-align the patch.
B.12.2 setState
0 0 0 1 1 0 0 | bnv | bcv | cap | 0
The setState command has a 7-bit opcode, 3 bits of state to be set, and a spare,
for a total length of 11 bits. The first and second state bits indicate if normals
and/or colors will be bundled with vertex commands, respectively. The third
state bit indicates if colors will contain an alpha value, in addition to the standard
RGB. The spare bit is unused and reserved for future use.
Instruction bit layouts (fields left to right):

vertex:               0 1 | Position bits 0–5 | rep | mbp | Position bits 6–n | Normal bits | Color bits
normal:               1 1 | Normal bits 0–5 | Normal bits 6–n
color:                1 0 | Color bits 0–5 | Color bits 6–n
meshBufferReference:  0 0 1 | Index | Rep
setState:             0 0 0 1 1 0 0 | bnv | bcv | cap | 0
setTable:             0 0 0 1 0 | Table | Range | Entry
Reserved (unused):    0 0 0 0 1
NOP:                  0 0 0 0 0 0 0 1 | Bit Count | 0s

Subcommand bit layouts:

Position: Tag | ΔX | ΔY | ΔZ
Normal:   Tag | Δθ̂n | Δφ̂n (or absolute index)
Color:    Tag | ΔR | ΔG | ΔB | Δα
B.12.3 setTable
The setTable command has a 5-bit op code, a 2-bit table field, a 7-bit address/
range field, a 4-bit data length field, an absolute/relative bit, and a 4-bit up-shift
field. The total instruction length is fixed at 23 bits. The table and address/range
fields specify which decompression table entries to update; the remaining fields
comprise the values to which to update the table entries.
The two-bit table specifies for which of the three decompression tables this
update is targeted:
00 Position
01 Color
10 Normal
11 Unused—reserved for future use
The seven-bit address/range field specifies which entries in the specified table are
to be set to the values in the following fields.
Address/Range            Semantics                                              Implicit Tag Length
1 a5 a4 a3 a2 a1 a0      set table entry a5a4a3a2a1a0                           6
0 1 a5 a4 a3 a2 a1       set table entries a5a4a3a2a1 0 through a5a4a3a2a1 1    5
0 0 1 a5 a4 a3 a2        set table entries a5a4a3a2 00 through a5a4a3a2 11      4
0 0 0 1 a5 a4 a3         set table entries a5a4a3 000 through a5a4a3 111        3
0 0 0 0 1 a5 a4          set table entries a5a4 0000 through a5a4 1111          2
0 0 0 0 0 1 a5           set table entries a5 00000 through a5 11111            1
0 0 0 0 0 0 1            set table entries 000000 through 111111                0
The idea is that table settings are made in aligned power-of-two ranges. The
position of the first ‘1’ bit in the address/range field indicates how many entries
are to be consecutively set; the remaining bits after the first ‘1’ are the upper
address bits of the base of the table entries to be set. This also sets the length of
the “tag” that this entry defines as equal to the number of address bits (if any)
after the first ‘1’ bit.
The data length specifies how large the delta values to be associated with this tag
are; a data length of 12 implies that the upper 4 bits are to be sign extensions of
the incoming delta value. Note that the data length describes not the length of the
delta value coming in, but the final position of the delta value for reconstruction.
In other words, the data length field is the sum of the actual delta bits to be read
in plus the up-shift amount. For the position and color tables, the data length val-
ues of 1 to 15 correspond to lengths of 1 to 15, but the data length value of 0
encodes an actual length of 16, as a length of 0 makes no sense for positions and
colors. For normals, a length of 0 is sometimes appropriate, and the maximum
length needed is only 7. Thus for normals, the values 0 to 7 map through 0 to 7,
and 8 to 15 are not used.
The up-shift value is the number of bits that the delta values described by these
tags will be shifted up before being added to the current value.
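The address/range decoding and the delta reconstruction can be sketched as follows (hypothetical helper names; the delta passed to applyDelta is assumed to be already sign-extended):

```java
public class TableDecode {
    // Decodes the 7-bit address/range field. The position of the first '1'
    // bit determines how many consecutive entries are set; the bits after it
    // are the upper address bits of the base entry. Returns
    // { base entry, entry count, implicit tag length }.
    static int[] decodeAddressRange(int field) {
        int tagLength = 6;
        int bit = 1 << 6;
        while (tagLength > 0 && (field & bit) == 0) {
            tagLength--;
            bit >>= 1;
        }
        int count = 1 << (6 - tagLength);
        int base = (field & (bit - 1)) << (6 - tagLength);
        return new int[] { base, count, tagLength };
    }

    // Reconstruction of a relative value: the sign-extended delta is
    // shifted up by the table entry's up-shift, then added to the current value.
    static int applyDelta(int current, int delta, int upShift) {
        return current + (delta << upShift);
    }
}
```

For example, the field 0010110 sets the four entries 24 through 27 with an implicit tag length of 4.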
The absolute/relative flag indicates whether this table entry describes values that
are to be interpreted as an absolute reference or a relative delta. Note that for
normals, absolute references will have an additional six leading bits describing
the absolute octant and sextant.
B.12.4 meshBufferReference
0 0 1 | Index | Rep
There is no mesh buffer re-push bit; mesh buffer contents may be referenced
multiple times until 16 newer vertices have been pushed. After that, if a vertex
is still needed, it must be resent.
For clarity, because it is by far the most typical case, the coordinate bit-fields are
labeled ΔR ΔG ΔB (Δα), though more properly they are R, G, and B fields; their
actual interpretation is absolute or relative depending on the setting of that bit in
the decompression table entry corresponding to the tag field. In both cases the
fields are signed two’s-complement numbers.
If the most recent setting of the cap bit by a setState command is zero, then no
fourth (alpha) field is expected, and none may be present. If the cap bit was set,
then the alpha field will be processed and must be present.
The rest of the graphics pipeline and frame buffer following the geometry
decompression stage may choose not to use all (up to) 16 bits of color compo-
nent information; in this case it is acceptable to truncate the trailing bits during
decompression. What the geometry decompression format does require is that
color setting of any size up to 16 bits be supported, even if all the bits are not
used.
normal (special): Tag (0–6 bits) | 1 1 | Special (4 bits) | 0
As usual, the first six bits of the subcommand are actually forwarded ahead of
the rest of the command. Depending on the length of the tag and delta fields, the
first six bits might only contain the tag, or the tag and some of the other field
bits, or any subset up to the entire subcommand, if short enough.
A normal subcommand is interpreted as relative or absolute depending on the
current setting of that bit in the decompression table entry corresponding to the
tag field. Unlike the position and color subcommands, the number of fields in
a normal command differs between the absolute and relative types.
When the subcommand is relative, there are two delta angle fields after the tag
field, both of the same length, up to seven bits. These two fields are signed two’s-
complement numbers. If after delta addition the resulting angle is outside the
current sextant or octant, the sextant/octant wrapping rules (described elsewhere)
apply.
When the subcommand is absolute, four bit fields follow the tag. The first is a
three-bit (fixed-length) absolute sextant field, indicating in which of six sextants
of an octant of the unit sphere this normal resides. The second field is also fixed
at three bits, and indicates in which octant of the unit sphere the normal resides.
The last two fields are absolute angles within the sextant, and are unsigned posi-
tive numbers, up to six bits in length.
Fourteen special absolute normals are encoded by the unused two settings within
the three sextant bits. This is indicated by specifying the angle fields to have a
length of zero (not present), the first two bits of the sextant field to both have a
value of 1, and the trailing bit after the octant field to have a value of 0.
Table B-1 lists the 14 special normals.
Table B-1 The 14 Special Normals

Special   NX       NY       NZ       Comment
0000      1.0      0.0      0.0      +X axis
0010      –1.0     0.0      0.0      –X axis
0100      0.0      1.0      0.0      +Y axis
0110      0.0      –1.0     0.0      –Y axis
1000      0.0      0.0      1.0      +Z axis
1010      0.0      0.0      –1.0     –Z axis
0001      1/√3     1/√3     1/√3     +X +Y +Z
0011      1/√3     1/√3     –1/√3    +X +Y –Z
0101      1/√3     –1/√3    1/√3     +X –Y +Z
0111      1/√3     –1/√3    –1/√3    +X –Y –Z
1001      –1/√3    1/√3     1/√3     –X +Y +Z
1011      –1/√3    1/√3     –1/√3    –X +Y –Z
1101      –1/√3    –1/√3    1/√3     –X –Y +Z
1111      –1/√3    –1/√3    –1/√3    –X –Y –Z
The rest of the graphics pipeline and frame buffer following the geometry
decompression stage may choose not to use all (up to) 16 bits of normal compo-
nent information; in this case it is acceptable to truncate the trailing bits during
decompression. What the geometry decompression format does require is that
normal settings of any size up to 18-bit absolute normals be supported, even if all
the decompressed bits are not used.
B.12.8 vertex
0 1 | Position bits 0–5 | rep | mbp | Position bits 6–n | Normal bits | Color bits
The mesh buffer push bit indicates whether this vertex should be pushed into the
mesh buffer so as to be eligible for later re-reference.
The position, normal, and color subcommands have the semantics docu-
mented in their individual sections.
B.12.9 normal
B.12.10 color
The color command has a two-bit opcode, and a color subcommand. The
color subcommand semantics are documented in Section B.12.6, “Color Sub-
command.”
If a color command is present immediately before a meshBufferReference
command, then the new color value overrides the color data present in the mesh
buffer for that particular mesh buffer reference.
First the fixed-length eight- (or six-) bit header for the next full command (or
subcommand) to be processed is detached from the current head of the com-
pressed stream. Next, the variable-length body bits for the previous command (or
subcommand) are detached from the compressed stream and combined with the
already extracted header for the previous command; the previous command is
now complete and can be processed. Now the fixed-length header for the com-
mand after the next is detached from the bit stream, and then finally the variable-
length body for the next full command can be detached; the next command is
now complete and can be processed.
One slight complexity: get_8_bits() extracts only six bits of header for the
color or normal subcommand of a vertex command; it extracts a full eight bits
of header in all other cases.
absolute_position(x, y, z):
cur_x ← x, cur_y ← y, cur_z ← z
absolute_color(r, g, b {, α}):
cur_r ← r, cur_g ← g, cur_b ← b, {cur_α ← α }
relative_normal(Δu, Δv):
flip_u[6] = { 4, 5, 3, 2, 0, 1 }
flip_v[6] = { 2, 4, 1, 1, 2, 4 }
flip_uv[6] = { 2, 3, 0, 1, 5, 4 }
nx ← norms[v,u].nx, ny ← norms[v,u].ny, nz ← norms[v,u].nz
if (cur_sex & 4) t ← nx, nx ← nz, nz ← t
if (cur_sex & 2) t ← ny, ny ← nz, nz ← t
if (cur_sex & 1) t ← nx, nx ← ny, ny ← t
if (cur_oct & 1) nz ← -nz
if (cur_oct & 2) ny ← -ny
if (cur_oct & 4) nx ← -nx
The contents of the norms[] table are exactly specified; the next revision of
this specification will contain an exact listing of the values.
normal(n):
current_normal ← n, normal_override ← 1
color(c):
current_color ← c, color_override ← 1
vertex(rep, mbp, p {, n} {, c}):
current_position ← p,
if (bnv) current_normal ← n,
if (bcv) current_color ← c,
output_vertex(rep, current_position, current_normal, current_color)
if (mbp) mesh_buffer[mesh_index].position ← p
if (mbp && bnv) mesh_buffer[mesh_index].normal ← n
if (mbp && bcv) mesh_buffer[mesh_index].color ← c
if (mbp) mesh_index ← (mesh_index + 1) & 15
normal_override ← 0, color_override ← 0
meshBufferReference(rep, i):
current_position ← mesh_buffer[(mesh_index - i - 1) & 15].position
if (bnv && !normal_override)
current_normal ← mesh_buffer[(mesh_index - i - 1) & 15].normal
if (bcv && !color_override)
current_color ← mesh_buffer[(mesh_index - i - 1) & 15].color
output_vertex(rep, current_position, current_normal, current_color)
setState(new_bnv, new_bcv, new_cap, new_tex):
bnv ← new_bnv,
bcv ← new_bcv,
cap ← new_cap,
tex ← new_tex
passthrough(data):
(null)
vnop(length):
(null)
output_vertex(replace_middle, newv):
if (number_of_vertices < 2)
midlest ← newest, newest ← newv, number_of_vertices++
else if (number_of_vertices < 3)
oldest ← midlest, midlest ← newest, newest ← newv,
number_of_vertices++,
intermediate_triangle(ccw, oldest, midlest, newest)
else if (number_of_vertices == 3)
midlest ← newest, newest ← newv,
intermediate_triangle(ccw, oldest, midlest, newest)
output_vertex(replace_oldest, newv):
if (number_of_vertices < 2)
midlest ← newest, newest ← newv, number_of_vertices++
else if (number_of_vertices < 3)
oldest ← midlest, midlest ← newest, newest ← newv,
number_of_vertices++,
intermediate_triangle(ccw, oldest, midlest, newest)
else if (number_of_vertices == 3)
oldest ← midlest, midlest ← newest, newest ← newv,
ccw = 1 - ccw,
intermediate_triangle(ccw, oldest, midlest, newest)
intermediate_triangle(ccw, v1, v2, v3):
if (ccw)
final_triangle(v1.position, v1.normal, v1.color,
v2.position, v2.normal, v2.color,
v3.position, v3.normal, v3.color)
else if (!ccw)
final_triangle(v2.position, v2.normal, v2.color,
v1.position, v1.normal, v1.color,
v3.position, v3.normal, v3.color)
B.15.3 Position
B.15.4 Normals
oct = 0;
if(nx < 0.0) oct |= 4, nx = -nx
if(ny < 0.0) oct |= 2, ny = -ny
if(nz < 0.0) oct |= 1, nz = -nz
sex = 0;
if (nx < ny) t = nx, nx = ny, ny = t, sex |= 1
if (nz < ny) t = ny, ny = nz, nz = t, sex |= 2
if (nx < nz) t = nx, nx = nz, nz = t, sex |= 4
The folded normal is then dotted with all quantized normals in the canonical
sextant; the u, v address producing the largest (nearest unity) dot product
result is the new quantized normal representation (along with the octant and sex-
tant representation).
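As a sketch, the fold pseudocode above translates directly into Java (hypothetical class name; the returned array packs the octant code, the sextant code, and the folded components):

```java
public class NormalFold {
    // Folds a normal into the canonical octant and sextant.
    // Returns { oct, sex, nx, ny, nz } with the folded components last.
    static double[] fold(double nx, double ny, double nz) {
        int oct = 0, sex = 0;
        double t;
        // Fold into the positive octant, recording the original signs.
        if (nx < 0.0) { oct |= 4; nx = -nx; }
        if (ny < 0.0) { oct |= 2; ny = -ny; }
        if (nz < 0.0) { oct |= 1; nz = -nz; }
        // Fold into the canonical sextant, recording the swaps performed.
        if (nx < ny) { t = nx; nx = ny; ny = t; sex |= 1; }
        if (nz < ny) { t = ny; ny = nz; nz = t; sex |= 2; }
        if (nx < nz) { t = nx; nx = nz; nz = t; sex |= 4; }
        return new double[] { oct, sex, nx, ny, nz };
    }
}
```

For example, (−0.5, 0.7, 0.1) folds to octant 4, sextant 3, with folded components (0.7, 0.1, 0.5).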
B.15.5 Colors
The colors are assumed to be in a 0.0 to 1.0 representation to begin with.
When two neighboring normals lie in the same sextant and octant, take the delta
of each angular component and find its magnitude. Compute the number of bits
for this component, including one sign bit. This is the length to be histogrammed
for this pair of normals.
If the normals have different sextants and/or octants, check to see if their sextants
share an edge. Depending on what type of edge they share, the delta including
the change in edges is encoded in one of three ways: U + ΔU < 0, V + ΔV < 0,
and U + ΔU + V + ΔV > 64. Each case is discussed in the paragraphs below. The
sextant numbers are from the binary codes shown in Figure B-3.
Sextants 0 and 4, 1 and 5, and 2 and 3 share the U = 0 edge. When crossing this
boundary, ΔU becomes ~U – last_u. This will generate a negative cur_u value
during decompression, which causes the decompressor to invert cur_u and look
up the new sextant in a table.
Sextants 0 and 2, 1 and 3, and 4 and 5 share the U + V = 64 edge. ΔU becomes
64 – U – last_u and ΔV becomes 64 – V – last_v. When cur_u + cur_v > 64,
the decompressor sets cur_u = 64 – cur_u and cur_v = 64 – cur_v, and a table
lookup determines the new sextant.
Each sextant shares the V = 0 edge with its corresponding sextant in another
octant. When in sextants 1 or 5, the normal moves across the X-axis, across the
Y-axis for sextants 0 or 4, and across the Z-axis for sextants 2 or 3. ΔV becomes
~V – last_v. The decompressor inverts a negative cur_v and performs a table
lookup for a mask to exclusive-OR with the current octant value.
Otherwise the normals cannot be delta encoded, and so the second (target) nor-
mal must be represented by an absolute reference to its three octant, three sex-
tant, and 2 N-bit U V addresses. This is the length to be histogrammed for this
pair of normals.
Codes generated with a length greater than six, the maximum code length, must
be shortened. These nodes are merged with more frequent nodes by increasing
the number of sign bits included with the smaller data length.
head position and orientation contributes little to a camera model’s camera posi-
tion and orientation; however, it does affect the projection matrix.
From a camera-based perspective, the application developer must construct the
camera’s position and orientation by combining the virtual-world component (the
position and orientation of the magic carpet) and the physical-world component
(the user’s instantaneous head position and orientation).
Java 3D’s view model incorporates the appropriate abstractions to compensate
automatically for such variability in end-user hardware environments.
[Figure: coordinate systems in the Java 3D view model — left/right image plate (LCC, RCC), coexistence, ViewPlatform, virtual world (Vworld), and head coordinates ("fishtank" mode)]
The Left Image Plate and Right Image Plate Coordinate Systems
The left image plate and right image plate coordinate systems correspond with
the physical coordinate system of the image generator associated with the left
and right eye, respectively. The image plate is defined as having its origin at the
lower left-hand corner of the display area and lying in the display area’s XY
plane. Note that the left image plate’s XY plane does not necessarily lie parallel
to the right image plate’s XY plane. Note that left image plate and right image
plate are different coordinate systems than the room-mounted display environ-
ment’s image plate coordinate system.
Methods
These methods set and retrieve a flag specifying whether to enable the use of six-
degrees-of-freedom tracking hardware.
These methods set and retrieve a flag that specifies whether or not to repeatedly
generate the user-head-to-vworld transform (initially false).
This method returns a string that contains the values of this View object.
Methods
These two methods set and retrieve the current policy for view computation. The
policy variable specifies how Java 3D uses its transforms in computing new
viewpoints, as follows:
Version 1.1 Alpha 01, February 27, 1998
C.5.2 Screen Scale Policy
These methods set and retrieve the current screen scale policy.
These methods set and retrieve the screen scale value. This value is used when
the view attach policy is NOMINAL_SCREEN_SCALED and the screen scale policy is
SCALE_EXPLICIT.
Constants
This variable tells Java 3D that it should modify the eyepoint position so it is
located at the appropriate place relative to the window to match the specified
field of view. This implies that the view frustum will change whenever the appli-
cation changes the field of view. In this mode, the eye position is read-only. This
is the default setting.
This variable tells Java 3D to interpret the eye’s position relative to the entire
screen. No matter where an end user moves a window (a Canvas3D), Java 3D
continues to interpret the eye’s position relative to the screen. This implies that
the view frustum changes shape whenever an end user moves the location of a
window on the screen. In this mode, the field of view is read-only.
This variable specifies that Java 3D should interpret the eye’s position informa-
tion relative to the window (Canvas3D). No matter where an end user moves a
window (a Canvas3D), Java 3D continues to interpret the eye’s position relative
to that window. This implies that the frustum remains the same no matter where
the end user moves the window on the screen. In this mode, the field of view is
read-only.
Methods
This variable specifies how Java 3D handles the predefined eyepoint in a non-
head-tracked application. The variable can contain one of three values:
RELATIVE_TO_FIELD_OF_VIEW, RELATIVE_TO_SCREEN, or RELATIVE_TO_WINDOW.
The default value is RELATIVE_TO_FIELD_OF_VIEW.
Constants
These constants specify the monoscopic view policy. The first constant specifies
that the monoscopic view should be the view as seen from the left eye. The sec-
ond constant specifies that the monoscopic view should be the view as seen from
the right eye. The third constant specifies that the monoscopic view should be the
view as seen from the “center eye,” the fictional eye half-way between the left
and right eyes. This is the default setting.
Methods
These methods set and return the monoscopic view policy, respectively.
The first method takes the sensor’s last reading and generates a sensor-to-vworld
coordinate system transform. This Transform3D object takes points in that sen-
sor’s local coordinate system and transforms them into virtual world coordinates.
The next two methods retrieve the specified sensor’s last hotspot location in vir-
tual world coordinates.
[Figure: view object diagram — TransformGroup (TG) nodes above the ViewPlatform; the View references Canvas3D/Screen3D pairs and the PhysicalBody and PhysicalEnvironment objects]
Measured Parameters
These calibration parameters are set once, typically by a browser, calibration pro-
gram, system administrator, or system calibrator, not by an applet.
These methods store the screen’s (image plate’s) physical width and height in
meters. The system administrator or system calibrator must provide these values
by measuring the display’s active image width and height. In the case of a head-
mounted display, this should be the display’s apparent width and height at the
focal plane.
This method returns a status flag indicating whether scene antialiasing is avail-
able.
These values determine eye placement when a head tracker is not in use and the
application is directly controlling the eye position in image plate coordinates. In
head-tracked mode or when the windowEyepointPolicy is RELATIVE_TO_
FIELD_OF_VIEW, this value is derived from other values and is read-only. In head-
tracked mode, Java 3D repetitively generates these values as a function of the
current head position. The center eye is the fictional eye half-way between the
left and right eye.
This method computes the position of the specified AWT pixel value in image
plate coordinates and copies that value into the object provided.
These methods set and retrieve the position of the manual left and right eyes in
image plate coordinates. These values determine eye placement when a head
tracker is not in use and the application is directly controlling the eye position in
image plate coordinates. In head-tracked mode or when the windowEyepoint-
Policy is RELATIVE_TO_FIELD_OF_VIEW, this value is ignored. When the win-
dowEyepointPolicy is RELATIVE_TO_WINDOW, only the Z value is used.
These methods retrieve the physical width and height of this canvas window, in
meters.
Constructors
public PhysicalBody()
Constructs a default user PhysicalBody object with the following default eye and
ear positions:
Left eye: –0.033, 0.0, 0.0
These methods construct a PhysicalBody object with the specified eye and ear
positions.
Methods
These methods set and retrieve the position of the center of rotation of a user’s
left and right eyes in head coordinates.
These methods set and retrieve the position of the user’s left and right ear posi-
tions in head coordinates.
These methods set and retrieve the user’s nominal eye height as measured from
the ground to the center eye in the default posture. In a standard computer moni-
tor environment, the default posture would be seated. In a multiple-projection
display room environment or a head-tracked environment, the default posture
would be standing.
These methods set and retrieve the offset from the center eye to the center of the
display screen. This offset distance allows an “over the shoulder” view of the
scene as seen by the end user.
These methods set and retrieve the head-to-head-tracker coordinate system trans-
form. If head tracking is enabled, this transform is a calibration constant. If head
tracking is not enabled, this transform is not used. This transform is used in both
SCREEN_VIEW and HMD_VIEW modes.
This method returns a string that contains the values of this PhysicalBody object.
Constructors
public PhysicalEnvironment()
public PhysicalEnvironment(int sensorCount)
and methods that set and initialize the device driver and output playback associ-
ated with the audio device.
Methods
The PhysicalEnvironment object specifies the following methods pertaining to
audio output devices and input sensors.
This method selects the specified AudioDevice object as the device through
which audio rendering for this PhysicalEnvironment will be performed.
These methods set and retrieve the count of the number of sensors stored within
the PhysicalEnvironment object. It defaults to a small number of sensors. It
should be set to the number of sensors available in the end-user’s environment
before initializing the Java 3D API.
This method returns a status flag indicating whether or not tracking is available.
The first method sets the sensor specified by the index to the sensor provided.
The second method retrieves the specified sensor.
These methods set and retrieve the index of the dominant hand.
These methods set and retrieve the index of the nondominant hand.
These methods set and retrieve the index of the head, right hand, and left hand.
The index parameter refers to the sensor index.
These methods set and retrieve the physical coexistence policy used in this phys-
ical environment. This policy specifies how Java 3D will place the user’s eye-
point as a function of current head position during the calibration process.
Java 3D permits one of three values: NOMINAL_HEAD, NOMINAL_FEET, or NOMINAL_SCREEN. Note: NOMINAL_SCREEN_SCALED is not allowed for this policy.
sponding virtual eye’s position relative to the ViewPlatform’s origin. Each eye’s
position also serves to specify that eye’s frustum since the eye’s position relative
to a Screen3D uniquely specifies that eye’s view frustum. Note that Java 3D will
access the PhysicalBody object to obtain information describing the user's interpupillary distance and tracking hardware, values it needs to compute the end-user's eye positions from the head position information.
Methods
Note: Use of these view-compatibility functions will disable some of Java 3D’s
view model features and limit the portability of Java 3D programs. These methods
are primarily intended to help jump-start porting of existing applications.
View Frustum
The locations of the near and far clipping planes allow the application programmer to specify which objects Java 3D should not draw. Objects too far away from
the current eyepoint usually do not result in interesting images. Those too close
to the eyepoint might obscure the interesting objects. By carefully specifying
near and far clipping planes, an application programmer can control which
objects the renderer will not be drawing.
From the perspective of the display device, the virtual camera’s image plane cor-
responds to the display screen. The camera’s placement, orientation, and field of
view determine the shape of the view frustum.
This is a utility method that specifies the position and orientation of a viewing
transform. It works very similarly to the equivalent function in OpenGL. The
inverse of this transform can be used to control the ViewPlatform object within
the scene graph. Alternatively, this transform can be passed directly to the View’s
VpcToEc transform via the compatibility-mode viewing functions (see
Section C.11.2.3, “Setting the Viewing Transform”).
The frustum method establishes a perspective projection with the eye at the apex
of a symmetric view frustum. The transform maps points from eye coordinates to
clipping coordinates. The clipping coordinates generated by the resulting trans-
form are in a right-handed coordinate system (as are all other coordinate systems
in Java 3D).
The arguments define the frustum and its associated perspective projection:
(left, bottom, -near) and (right, top, -near) specify the point on the near
clipping plane that maps onto the lower-left and upper-right corners of the win-
dow, respectively. The -far parameter specifies the far clipping plane. See
Figure C-8.
The perspective method establishes a perspective projection with the eye at the
apex of a symmetric view frustum, centered about the Z-axis, with a fixed field of
view. The resulting perspective projection transform mimics a standard camera-
based view model. The transform maps points from eye coordinates to clipping
coordinates. The clipping coordinates generated by the resulting transform are in
a right-handed coordinate system.
The arguments define the frustum and its associated perspective projection:
-near and -far specify the near and far clipping planes; fovx specifies the field
of view in the X dimension, in radians; and aspect specifies the aspect ratio of
the window. See Figure C-9.
[Figure C-8: perspective view frustum defined by left, right, bottom, top, near, and far]
[Figure C-9: perspective view volume defined by fovx, aspect = x/y, zNear, and zFar]
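The projection the perspective method describes can be sketched in self-contained Java. This is the standard OpenGL-style perspective matrix (written here from scratch, not the actual Java 3D Transform3D internals; the class and method names are hypothetical), assuming right-handed eye coordinates with the viewer looking down −Z as in the text, and fovx the X field of view in radians.

```java
// Sketch of a perspective projection matrix built from fovx, aspect,
// zNear, and zFar, plus a helper that applies it with the perspective
// divide. Points on the near/far planes map to clip depth -1/+1.
public class PerspectiveSketch {
    // Returns the 4x4 projection matrix in row-major order.
    static double[][] perspective(double fovx, double aspect,
                                  double zNear, double zFar) {
        double m00 = 1.0 / Math.tan(fovx / 2.0);
        double[][] m = new double[4][4];
        m[0][0] = m00;            // x scaled by cot(fovx/2)
        m[1][1] = m00 * aspect;   // y scale derived from aspect = x/y
        m[2][2] = (zFar + zNear) / (zNear - zFar);
        m[2][3] = 2.0 * zFar * zNear / (zNear - zFar);
        m[3][2] = -1.0;           // w' = -z drives the perspective divide
        return m;
    }

    // Applies m to (x, y, z, 1) and performs the perspective divide.
    static double[] project(double[][] m, double x, double y, double z) {
        double[] v = { x, y, z, 1.0 };
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i][j] * v[j];
        return new double[] { r[0] / r[3], r[1] / r[3], r[2] / r[3] };
    }

    public static void main(String[] args) {
        double[][] m = perspective(Math.PI / 2, 1.0, 1.0, 100.0);
        System.out.println(project(m, 0, 0, -1.0)[2]);   // near plane -> -1
        System.out.println(project(m, 0, 0, -100.0)[2]); // far plane  -> +1
    }
}
```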
These compatibility-mode methods specify a viewing frustum for the left and
right eye that transforms points in eye coordinates to clipping coordinates. If
compatibility mode is disabled, a RestrictedAccessException is thrown. In
monoscopic mode, only the left eye projection matrix is used.
T HE Java 3D API uses the standard Java exception model for handling errors
or exceptional conditions. In addition to using existing exception classes, such as
ArrayIndexOutOfBoundsException and IllegalArgumentException, Java 3D
defines several new runtime exceptions. These exceptions are thrown by various
Java 3D methods or by the Java 3D renderer to indicate an error condition of
some kind.
The exceptions defined by Java 3D, as part of the javax.media.j3d package, are
described in the following sections. They all extend RuntimeException and, as
such, need not be declared in the throws clause of methods that might cause the
exception to be thrown. This appendix is not an exhaustive list of all exceptions
expected for Java 3D. Additional exceptions will be added as the need arises.
D.1 BadTransformException
Indicates an attempt to use a Transform3D object that is inappropriate for the
object in which it is being used. For example:
• Transforms that are used in the scene graph, within a TransformGroup
node, must be affine. They may optionally contain a nonuniform scale or a
shear, subject to other listed restrictions.
• All transforms in the TransformGroup nodes above a ViewPlatform object
must be congruent. This ensures that the Vworld-coordinates-to-
ViewPlatform-coordinates transform is angle- and length-preserving with
no shear and only uniform scale.
• Most viewing transforms other than those in the scene graph can only con-
tain translation and rotation.
Constructors
public BadTransformException()
public BadTransformException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.2 CapabilityNotSetException
This exception indicates an access to a live or compiled Scene Graph object
without the required capability set.
Constructors
public CapabilityNotSetException()
public CapabilityNotSetException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.3 DanglingReferenceException
This exception indicates that during a cloneTree call, an updated reference was
requested for a node that did not get cloned. This occurs when a subgraph is
duplicated via cloneTree and has at least one leaf node that contains a reference
to a node with no corresponding node in the cloned subgraph. This results in two
leaf nodes wanting to share access to the same node.
If dangling references are to be allowed during the cloneTree call, cloneTree
should be called with the allowDanglingReferences parameter set to true.
Constructors
public DanglingReferenceException()
public DanglingReferenceException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.4 IllegalRenderingStateException
This exception indicates an illegal state for rendering. This includes:
• Lighting without specifying normals in a geometry array object
• Texturing without specifying texture coordinates in a geometry array ob-
ject
Constructors
public IllegalRenderingStateException()
public IllegalRenderingStateException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.5 IllegalSharingException
This exception indicates an illegal attempt to share a scene graph object. For
example, the following are illegal:
• Referencing a shared subgraph in more than one virtual universe
• Using the same component object both in the scene graph and in an imme-
diate-mode graphics context
• Including an unsupported type of leaf node within a shared subgraph
• Referencing a BranchGroup node in more than one of the following ways:
• Attaching it to a (single) Locale
• Adding it as a child of a Group node within the scene graph
• Referencing it from a (single) Background leaf node as background geometry
Constructors
public IllegalSharingException()
public IllegalSharingException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.6 MismatchedSizeException
This exception indicates that an operation cannot be completed properly because
of a mismatch in the sizes of the object attributes.
Constructors
public MismatchedSizeException()
public MismatchedSizeException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.7 MultipleParentException
This exception extends IllegalSharingException and indicates an attempt to
add a node that is already a child of one group node into another group node.
Constructors
public MultipleParentException()
public MultipleParentException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.8 RestrictedAccessException
This exception indicates an attempt to access or modify a state variable without
permission to do so. For example, invoking a set method for a state variable that
is currently read-only.
Constructors
public RestrictedAccessException()
public RestrictedAccessException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.9 SceneGraphCycleException
This exception indicates that one of the live scene graphs attached to a viewable
Locale has a cycle in it. Java 3D scene graphs are directed acyclic graphs and, as
such, do not permit cycles. This exception is either thrown by the Java 3D ren-
derer at scene graph traversal time or when a scene graph containing a cycle is
made live (added as a descendant of a Locale object).
Constructors
public SceneGraphCycleException()
public SceneGraphCycleException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.10 SingularMatrixException
This exception, in the javax.vecmath package, indicates that the inverse of a
matrix cannot be computed.
Constructors
public SingularMatrixException()
public SingularMatrixException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
D.11 SoundException
This exception indicates a problem in loading or playing a sound sample.
Constructors
public SoundException()
public SoundException(String str)
These create the exception object that outputs the exception message. The first
form uses the default message. The second form specifies the message string to
be output.
T HIS appendix contains the Java 3D equations for fog, lighting, sound, and
texture mapping. Many of the equations use the following symbols:
⋅ Multiplication
• Function operator for sound equations,
Dot product for all other equations
C′ = C ⋅ f + Cf ⋅ (1 – f)      (E.1)
The fog coefficient, f, is computed differently for linear and exponential fog. The
equation for linear fog is as follows:
f = (B – z) ⁄ (B – F)      (E.2)
f = e^(–d ⋅ z)      (E.3)
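The two fog coefficients can be sketched directly from the equations. The helper names below are illustrative (not Java 3D API): linear fog interpolates between the front (F) and back (B) fog distances, while exponential fog decays with density d.

```java
// Sketch of the fog coefficients: linear fog (E.2) interpolates between
// the front (F) and back (B) distances; exponential fog (E.3) decays
// with density d.
public class FogCoeff {
    static double linearFog(double z, double front, double back) {
        return (back - z) / (back - front);   // f = (B - z) / (B - F)
    }

    static double exponentialFog(double z, double density) {
        return Math.exp(-density * z);        // f = e^(-d * z)
    }

    public static void main(String[] args) {
        System.out.println(linearFog(5.0, 0.0, 10.0)); // 0.5: halfway
        System.out.println(exponentialFog(0.0, 1.0));  // 1.0: no fog at eye
    }
}
```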
The parameters used in the fog equations are as follows:
diffi = (Li • N) ⋅ Lci ⋅ Md      (E.5)
speci = (Si • N)^shin ⋅ Lci ⋅ Ms      (E.6)
atteni = 1 ⁄ (Kci + Kli ⋅ di + Kqi ⋅ di^2)      (E.7)
spoti = max(–Li • Di, 0)^expi      (E.8)
Note: If the vertex is outside the spot light cone, as defined by the cutoff angle,
spoti is set to 0. For directional and point lights, spoti is set to 1.
This is a subset of OpenGL in that the Java 3D ambient and directional lights are
not attenuated and only ambient lights contribute to ambient lighting.
The parameters used in the lighting equation are as follows:
E = Eye vector
Ma = Material ambient color
Md = Material diffuse color
Me = Material emissive color
Ms = Material specular color
N = Vertex normal
shin = Material shininess
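The per-light terms in equations E.5 through E.8 can be sketched for a single color channel. The class and helper names are hypothetical; Lc, Md, and Ms are treated as scalars and vectors as length-3 arrays, which simplifies the spec's per-channel form without changing the arithmetic.

```java
// Sketch of the per-light terms E.5-E.8 for one color channel.
public class LightTerms {
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // E.5: diff = (L . N) * Lc * Md
    static double diffuse(double[] L, double[] N, double lc, double md) {
        return dot(L, N) * lc * md;
    }

    // E.6: spec = (S . N)^shin * Lc * Ms
    static double specular(double[] S, double[] N, double shin,
                           double lc, double ms) {
        return Math.pow(dot(S, N), shin) * lc * ms;
    }

    // E.7: atten = 1 / (Kc + Kl*d + Kq*d^2)
    static double attenuation(double kc, double kl, double kq, double d) {
        return 1.0 / (kc + kl * d + kq * d * d);
    }

    // E.8: spot = max(-L . D, 0)^exp (set to 1 for directional/point lights)
    static double spot(double[] L, double[] D, double exp) {
        return Math.pow(Math.max(-dot(L, D), 0.0), exp);
    }

    public static void main(String[] args) {
        double[] n = {0, 0, 1}, l = {0, 0, 1};
        System.out.println(diffuse(l, n, 1.0, 1.0));    // 1.0: light along normal
        System.out.println(attenuation(1, 0, 0, 50.0)); // 1.0: constant-only
    }
}
```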
Ec = Vc
Ef = Vt + P      (E.10)
where
P = (De ⁄ 2) ⋅ ((π ⁄ 2) – (γ – α))
[Figure E-1: direct-path geometry — vectors Va, Vh, Vt, Vc, distances De and Dh, angles γ and α]
2. The signals from the sound source reach both ears by indirect paths
around the head ( sinα < De ⁄ 2Dh ); see Figure E-2:
Ec = Vt + P′
Ef = Vt + P      (E.11)
where
P = (De ⁄ 2) ⋅ ((π ⁄ 2) – (γ – α))
P′ = (De ⁄ 2) ⋅ ((π ⁄ 2) – (γ + α))
The time from the sound source to the closest ear is Ec ⁄ S , and the time from the
sound source to the farthest ear is Ef ⁄ S , where S is the current AuralAttribute
region’s speed of sound.
If the sound is closest to the left ear, then
ITDl = Ec ⁄ S
ITDr = Ef ⁄ S      (E.12)
If the sound is closest to the right ear, then
ITDl = Ef ⁄ S
ITDr = Ec ⁄ S      (E.13)
[Figure E-2: indirect-path geometry — vectors Va, Vh, Vt, offsets P and P′, distances De and Dh, angles γ and α]
G i = Gi i ⋅ Gd i ⋅ Ga i ⋅ Gr i (E.15)
Note: For BackgroundSound sources Gdi = Gai = 1.0. For PointSound sources
Gai = 1.0.
F i = Fd i • Fa i (E.16)
Note: For BackgroundSound sources Fdi and Fai are identity functions. For
PointSound sources Fai is an identity function.
If the sound source is on the right side of the head, Ec is used for left G and F
calculations and Ef is used for right. Conversely, if the Sound source is on the
left side of the head, Ef is used for left calculations and Ec is used for right.
Attenuation
For sound sources with a single distanceGain array defined, the intersection
points of Vh (the vector from the sound source position through the listener’s
position) and the spheres (defined by the distanceGain array) are used to find the
index k where dk ≤ L ≤ dk+1. See Figure E-3.
[Figure E-3: intersection of Vh with the distance-gain spheres around the listener — A = (dk, Gdk), B = (dk+1, Gdk+1), C = (αk, Gak), D = (αk+1, Gak+1)]
For ConeSound sources with two distanceGain arrays defined, the intersection
points of Vh and the ellipses (defined by both the front and back distanceGain
arrays) closest to the listener's position are used to determine the index k. See
Figure E-4.
[Figure E-4: intersection of Vh with the front and back distance-gain ellipses (frontDistanceAttenuation[], backDistanceAttenuation[]) — A = (d1, Gdk), B = (d2, Gdk+1), C = (αk, Gak), D = (αk+1, Gak+1)]
Gd = Gdk + (Gdk+1 – Gdk) ⋅ (L – d1) ⁄ (d2 – d1)      (E.17)
Angular attenuation for both the spherical and elliptical cone sounds is identical.
The angular distances in the attenuation array closest to α are found and define
the index k into the angular attenuation array elements. The equation for the
angular gain is
Ga = Gak + (Gak+1 – Gak) ⋅ (α – αk) ⁄ (αk+1 – αk)      (E.18)
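The bracketing-and-interpolating lookup used for both distance and angular gain can be sketched as follows. The helper is hypothetical (not spec API) and assumes the attenuation array's distances are sorted ascending, with gains clamped at the two ends as the spec's boundary behavior implies.

```java
// Sketch of the piecewise-linear gain lookup: find the bracketing pair
// in the attenuation array, then linearly interpolate the gain.
public class GainLookup {
    // distances[k] maps to gains[k]; returns the interpolated gain at x.
    static double interpolate(double[] distances, double[] gains, double x) {
        if (x <= distances[0]) return gains[0];
        int last = distances.length - 1;
        if (x >= distances[last]) return gains[last];
        int k = 0;
        while (!(distances[k] <= x && x <= distances[k + 1])) k++;
        double t = (x - distances[k]) / (distances[k + 1] - distances[k]);
        return gains[k] + (gains[k + 1] - gains[k]) * t;
    }

    public static void main(String[] args) {
        double[] d = { 0.0, 10.0, 20.0 };
        double[] g = { 1.0, 0.5, 0.0 };
        System.out.println(interpolate(d, g, 15.0)); // 0.25: halfway in [10,20]
    }
}
```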
Filtering
Similarly, the equations for calculating the AuralAttributes distance filter and the
ConeSound angular attenuation frequency cutoff filter are
Fd = Fdk + (Fdk+1 – Fdk) ⋅ (L – d1) ⁄ (d2 – d1)      (E.19)
Fa = Fak + (Fak+1 – Fak) ⋅ (α – αk) ⁄ (αk+1 – αk)      (E.20)
An N-pole lowpass filter may be used to perform the simple angular and distance
filtering defined in this version of Java 3D. These simple lowpass filters are
meant only as an approximation for full, FIR filters (to be added in some future
version of Java 3D).
S ( f )′ = S ( f ) – [ Ds ⋅ ( Dv ⁄ W ( f , Dh ) ) ] (E.21)
The parameters used in the Doppler effect equations are as follows:
t = Time
W = Wavelength of sound source based on frequency and distance
Ri = Σj [ (Gr^j ⋅ Sample(t)i) • D(t + (Tr ⋅ j)) ]      (E.23)
Note that the reverberation calculation outputs the same image to both left and
right output signals (thus there is a single monaural calculation for each sound
reverberated). Correct first-order (early) reflections, based on the location of the
sound source, the listener, and the active AuralAttribute’s bounds, are not
required for this version of Java 3D. Approximations based on the reverberation
delay time, either supplied by the application or calculated as the average delay
time within the selected AuralAttribute’s application region, will be used.
The feedback loop is repeated until AuralAttribute’s reverberation feedback loop
count is reached or Grj ≤ 0.000976 (effective zero amplitude, –60 dB, using the
measure of –6 dB drop for every doubling of distance).
D = Delay function
fLoop = Reverberation feedback loop count
Gr = Reverberation coefficient acting as a gain scale-factor
I = Stereo image of unreflected sound sources
R = Reverberation for each sound sources
Sample = Sound digital sample with a specific sample rate, bit precision,
and an optional encoding and/or compression format
t = Time
Tr = Reverberation delay time (approximating first-order delay in the
AuralAttribute region)
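The feedback-loop termination rule above can be sketched directly. The class and method names are hypothetical; the sketch just counts iterations until either the loop count fLoop is exhausted or Gr^j drops to the effective-zero threshold of 0.000976 (−60 dB).

```java
// Sketch of the reverberation feedback termination: iterate until the
// loop count is exhausted or Gr^j falls to effective zero (-60 dB).
public class ReverbLoop {
    static final double EFFECTIVE_ZERO = 0.000976; // -60 dB amplitude

    // Number of feedback iterations actually performed.
    static int iterations(double gr, int fLoop) {
        int j = 0;
        double gain = 1.0;
        while (j < fLoop) {
            gain *= gr;            // gain is now Gr^j after the increment
            j++;
            if (gain <= EFFECTIVE_ZERO) break;
        }
        return j;
    }

    public static void main(String[] args) {
        // 0.5^10 = 0.0009765625 > 0.000976, so an 11th echo is needed.
        System.out.println(iterations(0.5, 100)); // 11
    }
}
```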
I′ ( t ) l = I ( t ) l + [ D ( t ) • [ G ( P, α ) ⋅ I ( t ) r ] ] (E.24)
I′ ( t ) r = I ( t ) r + [ D ( t ) • [ G ( P, α ) ⋅ I ( t ) l ] ] (E.25)
The parameters used in the cross-talk equations, expanding on the terms used for
the equations for headphone playback, are as follows:
u = s ⋅ width
v = t ⋅ height      (E.26)
i = trunc(u)
j = trunc(v)      (E.27)
Ct = Ti,j      (E.28)
If the texture boundary mode is REPEAT, then only the fractional bits of s and t
are used, ensuring that both s and t are less than 1.
If the texture boundary mode is CLAMP, then the s and t values are clamped to be
in the range [0, 1] before being mapped into u and v values. Further, if s ≥ 1, then
i is set to width – 1; if t ≥ 1, then j is set to height – 1.
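The two boundary modes can be sketched for one texture coordinate. The helper below is illustrative (not Java 3D API): REPEAT keeps only the fractional part of s, while CLAMP pins s to [0, 1] and pins the index to width − 1 when s ≥ 1, exactly as described above.

```java
// Sketch of texture boundary handling for one coordinate: REPEAT keeps
// only the fractional bits of s; CLAMP pins s to [0, 1] and the index
// to width - 1 when s >= 1.
public class TexBoundary {
    static int texelIndex(double s, int width, boolean repeat) {
        if (repeat) {
            s = s - Math.floor(s);          // fractional bits only
            return (int) (s * width);
        }
        if (s <= 0.0) return 0;             // clamp low
        if (s >= 1.0) return width - 1;     // clamp high: i = width - 1
        return (int) (s * width);
    }

    public static void main(String[] args) {
        System.out.println(texelIndex(1.25, 8, true));  // 2: frac(1.25) = 0.25
        System.out.println(texelIndex(1.25, 8, false)); // 7: clamped to edge
    }
}
```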
The parameters in the point-sampled texture lookup equations are as follows:
The above equations are used when the selected texture filter function—either
the minification or the magnification filter function—is BASE_LEVEL_POINT.
Java 3D selects the appropriate texture filter function based on whether the tex-
ture image is minified or magnified when it is applied to the polygon. If the tex-
ture is applied to the polygon such that more than one texel maps onto a single
pixel, then the texture is said to be minified and the minification filter function is
selected. If the texture is applied to the polygon such that a single texel maps
onto more than one pixel, then the texture is said to be magnified and the magni-
fication filter function is selected. The selected function is one of the following:
BASE_LEVEL_POINT, BASE_LEVEL_LINEAR, MULTI_LEVEL_POINT, or MULTI_
LEVEL_LINEAR. In the case of magnification, the filter will always be one of the
two base level functions (BASE_LEVEL_POINT or BASE_LEVEL_LINEAR).
If the selected filter function is BASE_LEVEL_LINEAR, then a weighted average of
the four texels that are closest to the sample point in the base level texture image
is computed.
i0 = trunc(u – 0.5)
j0 = trunc(v – 0.5)
i1 = i0 + 1
j1 = j0 + 1      (E.29)
α = frac(u – 0.5)
β = frac(v – 0.5)      (E.30)
Ct = (1 – α) ⋅ (1 – β) ⋅ Ti0,j0 + α ⋅ (1 – β) ⋅ Ti1,j0 + (1 – α) ⋅ β ⋅ Ti0,j1 + α ⋅ β ⋅ Ti1,j1      (E.31)
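The BASE_LEVEL_LINEAR filter can be sketched as a standalone routine. This is an illustrative implementation, not the renderer's code: it computes the weighted average of the four nearest texels, with frac taken via floor and indices clamped at the borders so the sketch stays in range (the spec's actual border behavior depends on the boundary mode).

```java
// Sketch of the BASE_LEVEL_LINEAR filter (E.29-E.31): weighted average
// of the four texels nearest the sample point; indices are clamped so
// this sketch stays in range at the texture borders.
public class Bilinear {
    static double sample(double[][] tex, double u, double v) {
        int i0 = (int) Math.floor(u - 0.5), j0 = (int) Math.floor(v - 0.5);
        int i1 = i0 + 1, j1 = j0 + 1;
        double a = (u - 0.5) - Math.floor(u - 0.5); // alpha = frac(u - 0.5)
        double b = (v - 0.5) - Math.floor(v - 0.5); // beta  = frac(v - 0.5)
        int w = tex.length, h = tex[0].length;
        i0 = clamp(i0, w); i1 = clamp(i1, w);
        j0 = clamp(j0, h); j1 = clamp(j1, h);
        return (1 - a) * (1 - b) * tex[i0][j0] + a * (1 - b) * tex[i1][j0]
             + (1 - a) * b * tex[i0][j1] + a * b * tex[i1][j1];
    }

    static int clamp(int i, int n) { return Math.max(0, Math.min(i, n - 1)); }

    public static void main(String[] args) {
        double[][] tex = { { 0.0, 0.0 }, { 1.0, 1.0 } }; // tex[i][j]
        // Halfway between columns i=0 and i=1: an even blend of 0 and 1.
        System.out.println(sample(tex, 1.0, 0.5)); // 0.5
    }
}
```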
C′ = Ct      (E.32)
C′ = C ⋅ Ct      (E.33)
C′rgb = Crgb ⋅ (1 – Ctα) + Ctrgb ⋅ Ctα
C′α = Cα      (E.34)
C′rgb = Crgb ⋅ (1 – Ctrgb) + Cbrgb ⋅ Ctrgb
C′α = Cα ⋅ Ctα      (E.35)
Note that if the texture format is INTENSITY, alpha is computed identically to red,
green, and blue:
C′ α = C α ⋅ ( 1 – Ct α ) + Cb α ⋅ Ct α (E.36)
C = Color of the pixel being texture mapped (if lighting is enabled, then
this does not include the specular component)
Ct = Texture color
Cb = Blend color
Note that Crgb indicates the red, green, and blue channels of color C and that Cα
indicates the alpha channel of color C. This convention applies to the other color
variables as well.
If there is no alpha channel in the texture, a value of 1 is used for Ctα in BLEND
and DECAL modes.
When the texture mode is one of REPLACE, MODULATE, or BLEND, only certain of
the red, green, blue, and alpha channels of the pixel color are modified, depend-
ing on the texture format, as described below.
• INTENSITY: All four channels of the pixel color are modified. The inten-
sity value is used for each of Ctr, Ctg, Ctb, and Ctα in the texture applica-
tion equations, and the alpha channel is treated as an ordinary color
channel—the equation for C′rgb is also used for C′α.
• LUMINANCE: Only the red, green, and blue channels of the pixel color
are modified. The luminance value is used for each of Ctr, Ctg, and Ctb in
the texture application equations. The alpha channel of the pixel color is
unmodified.
• ALPHA: Only the alpha channel of the pixel color is modified. The red,
green, and blue channels are unmodified.
• LUMINANCE_ALPHA: All four channels of the pixel color are modified.
The luminance value is used for each of Ctr, Ctg, and Ctb in the texture ap-
plication equations, and the alpha value is used for Ctα.
• RGB: Only the red, green, and blue channels of the pixel color are modi-
fied. The alpha channel of the pixel color is unmodified.
• RGBA: All four channels of the pixel color are modified.
VRML 2.0 file is specified via routes: Java 3D does not include a similar struc-
ture.
F.2.2 An Approach
Developers can host a VRML 2.0 file within a Java 3D environment by con-
structing exactly the same scene graph structure as specified in a VRML 2.0 file.
They can then use the Java 3D behavior system or its mixed-mode callback fea-
tures to implement field value propagation. The remainder of this section defines
one such approach.
The thin-layer objects (VRML nodes) exist only to translate VRML semantics
into appropriate actions at runtime. If a VRML scene has subgraphs that cannot
be accessed during runtime (because no routes connect into nodes within that
subgraph) then the developer may choose not to retain—or even create—these
thin-layer VRML node objects, constructing only the underlying Java 3D
objects.
F.2.3 A Browser
Much like implementing a VRML 1.0 browser, a developer can implement a
VRML 2.0 browser using Java. A VRML 2.0 browser includes specific function-
ality, such as Viewpoint binding and a browser interface for use by Script nodes.
Most of this latter functionality involves interaction with the thin-layer VRML
node objects, but the browser may call Java 3D directly as well. For example, a
VRML browser developer can implement VRML Viewpoints by associating
Java 3D ViewPlatform nodes with each VRML Viewpoint. Then, to change
VRML Viewpoints, the browser would detach the Java 3D View object from the
current Java 3D ViewPlatform object and reattach it to the new Java 3D
ViewPlatform object associated with the desired VRML Viewpoint.
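The Viewpoint-binding scheme described above can be sketched as follows. The real Java 3D method being modeled is View.attachViewPlatform, which implicitly detaches the View from its current ViewPlatform; everything else here (the stub classes, the binder, the string keys) is a hypothetical stand-in, not the javax.media.j3d API.

```java
import java.util.HashMap;
import java.util.Map;

public class ViewpointBindingSketch {
    // Stand-in for javax.media.j3d.ViewPlatform.
    static class ViewPlatformStub {
        final String name;
        ViewPlatformStub(String name) { this.name = name; }
    }

    // Stand-in for javax.media.j3d.View.
    static class ViewStub {
        private ViewPlatformStub current;
        // Mirrors View.attachViewPlatform(ViewPlatform): attaching to a new
        // platform implicitly detaches the View from the old one.
        void attachViewPlatform(ViewPlatformStub vp) { current = vp; }
        ViewPlatformStub getViewPlatform() { return current; }
    }

    private final Map<String, ViewPlatformStub> platforms = new HashMap<>();
    private final ViewStub view;

    ViewpointBindingSketch(ViewStub view) { this.view = view; }

    // Each VRML Viewpoint gets its own Java 3D ViewPlatform.
    void register(String viewpointName) {
        platforms.put(viewpointName, new ViewPlatformStub(viewpointName));
    }

    // Binding a Viewpoint reattaches the single View to that platform.
    void bind(String viewpointName) {
        view.attachViewPlatform(platforms.get(viewpointName));
    }

    public static void main(String[] args) {
        ViewStub view = new ViewStub();
        ViewpointBindingSketch binder = new ViewpointBindingSketch(view);
        binder.register("entry");
        binder.register("overview");
        binder.bind("overview");
        System.out.println(view.getViewPlatform().name); // prints overview
    }
}
```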
Glossary

avatar
The software representation of a person as the person appears to others in a
shared virtual universe. The avatar may or may not resemble an actual person.
branch graph
A graph rooted to a BranchGroup node. See also scene graph and shared
graph.
CC
Clipping coordinates.
center ear
Midpoint between left and right ears of listener.
center eye
Midpoint between left and right eyes of viewer. This is the head coordinate
system origin.
compiled
A subgraph may be compiled by an application using the compile method of
the root node—a BranchGroup or a SharedGroup—of the graph. A compiled
object is any object that is part of a compiled graph. An application can compile some or all of the subgraphs that make up a complete scene graph. Java 3D
compiles these graphs into an internal format. Additionally, Java 3D provides
restricted access to methods of compiled objects or graphs. See also live.
compiled-retained mode
One of three modes in which Java 3D objects are rendered. In this mode,
Java 3D renders the scene graph, or a portion of the scene graph, that has been
previously compiled into an internal format. See also retained mode, immediate
mode.
DAG
Directed acyclic graph. A scene graph.
EC
Eye coordinates.
frustum
See view frustum.
group node
A node within a scene graph that composes, transforms, selects, and in general
modifies its descendant nodes. See also leaf node, root node.
HMD
Head-mounted display.
image plate
The display area; the viewing screen or head-mounted display.
immediate mode
One of three modes in which Java 3D objects are rendered. In this mode
objects are rendered directly, under user control, rather than as part of a scene
graph traversal. See also retained mode, compiled-retained mode.
IID
Interaural intensity difference. The difference between the perceived amplitude
(gain) of the signal from a source as it reaches the listener’s left and right ears.
ITD
Interaural time difference. The difference in time in the arrival of the signal
from a sound source as it reaches the listener’s left and right ears.
leaf node
A node within a scene graph that contains the visual, auditory, and behavioral
components of the scene. See also group node, root node.
live
A live graph is any graph that is attached to a Locale object, or a shared graph
that is referenced by a live graph. A live object is any object that is part of a
live graph. Live objects are subject to being traversed and rendered by the
Java 3D renderer. Additionally, Java 3D provides restricted access to methods
of live objects or graphs. See also compiled.
LOD
Level of detail. A predefined Behavior that operates on a Switch node to select
from among multiple versions of an object or collection of objects.
polytope
A bounding volume defined by a closed intersection of half-spaces.
retained mode
One of three modes in which Java 3D objects are rendered. In this mode,
Java 3D traverses the scene graph and renders the objects that are in the graph.
See also compiled-retained mode, immediate mode.
root node
A node within a scene graph that establishes the default environment. See also
group node, leaf node.
scene graph
A collection of branch graphs rooted to a Locale. A virtual universe has one or
more scene graphs. See also branch graph and shared graph.
shared graph
A graph rooted to a SharedGroup node. See also branch graph and scene
graph.
stride
The part of an interleaved array that defines the length of a vertex.
three space
Three-dimensional space.
view frustum
A truncated, pyramid-shaped viewing area that defines how much of the world
the viewer sees. Objects not within the view frustum are not visible. Objects
that intersect the boundaries of the viewing frustum are clipped (partially
drawn).
VPC
View platform coordinates.
W
w flag
Tuple4b, 320
Tuple4d, 322
Tuple4f, 329
wakeup
conditions, 220, 223
criterion, 221
WakeupAnd object, 235
WakeupAndOfOrs object, 236
WakeupCondition object, 226
WakeupCriterion object, 222, 226
wakeupOn method, 225
X
x flag
AxisAngle4d, 338
AxisAngle4f, 340
Tuple2f, 300
Tuple3b, 306
Tuple3d, 308
Tuple3f, 313
Tuple4b, 320
Tuple4d, 322
Tuple4f, 329
Y
y flag
AxisAngle4d, 338
AxisAngle4f, 340
Tuple2f, 300
Tuple3b, 306
Tuple3d, 308
Tuple3f, 313
Tuple4b, 320
Tuple4d, 322
Tuple4f, 329
yOffset constant, 184
Z
z flag
AxisAngle4d, 338
AxisAngle4f, 340
Tuple3b, 306
Tuple3d, 308
Tuple3f, 313
Tuple4b, 320
Tuple4d, 322
Tuple4f, 329
ZERO flag, 152
zero method
GMatrix, 377
GVector, 343
zOffset constant, 184