
Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System

Hirokazu Kato¹ and Mark Billinghurst²

¹ Faculty of Information Sciences, Hiroshima City University
² Human Interface Technology Laboratory, University of Washington
[email protected], [email protected]

Abstract

We describe an augmented reality conferencing system which uses the overlay of virtual images on the real world. Remote collaborators are represented on Virtual Monitors which can be freely positioned about a user in space. Users can collaboratively view and interact with virtual objects using a shared virtual whiteboard. This is possible through precise virtual image registration using fast and accurate computer vision techniques and HMD calibration. We propose a method for tracking fiducial markers and a calibration method for optical see-through HMDs based on the marker tracking.

1. Introduction

Computers are increasingly used to enhance collaboration between people. As collaborative tools become more common, the Human-Computer Interface is giving way to a Human-Human Interface mediated by computers. This emphasis adds new technical challenges to the design of Human-Computer Interfaces. These challenges are compounded for attempts to support three-dimensional Computer Supported Collaborative Work (CSCW). Although the use of spatial cues and three-dimensional object manipulation are common in face-to-face collaboration, tools for three-dimensional CSCW are still rare. However, new 3D interface metaphors such as virtual reality may overcome this limitation.

Virtual Reality (VR) appears a natural medium for 3D CSCW; in this setting computers can provide the same type of collaborative information that people have in face-to-face interactions, such as communication by object manipulation, voice and gesture [1]. Work on the DIVE project [2], GreenSpace [3] and other fully immersive multi-participant virtual environments has shown that collaborative work is indeed intuitive in such surroundings. However, most current multi-user VR systems are fully immersive, separating the user from the real world and their traditional tools.

As Grudin [4] points out, CSCW tools are generally rejected when they force users to change the way they work. This is because of the introduction of seams or discontinuities between the way people usually work and the way they are forced to work because of the computer interface. Ishii describes in detail the advantages of seamless CSCW interfaces [5]. Obviously, immersive VR interfaces introduce a huge discontinuity between the real and virtual worlds.

An alternative approach is Augmented Reality (AR), the overlaying of virtual objects onto the real world. In the past, researchers have explored the use of AR approaches to support face-to-face collaboration. Projects such as Studierstube [6], Transvision [7], and AR2 Hockey [8] allow users to see each other as well as 3D virtual objects in the space between them. Users can interact with the real world at the same time as with the virtual images, bringing the benefits of VR interfaces into the real world and facilitating very natural collaboration. In a previous paper we found that this means users collaborate better on a task in a face-to-face AR setting than on the same task in a fully immersive Virtual Environment [9].

We have been developing an AR conferencing system that allows virtual images (Virtual Monitors) of remote collaborators to be overlaid on the user's real environment. Our Augmented Reality conferencing system tries to overcome some of the limitations of current desktop video conferencing, including the lack of spatial cues [10], the difficulty of interacting with shared 3D data, and the need to be physically present at a desktop machine to conference. While using this system, users can easily change the arrangement of the Virtual Monitors, placing the virtual images of remote participants about them in the real world, and they can collaboratively interact with 2D and 3D information using a Virtual Shared Whiteboard. The virtual images are shown in a lightweight head mounted display, so with a wearable computer our system could be made portable, enabling collaboration anywhere in the workplace.

In developing a multi-user augmented reality video conferencing system, precise registration of the virtual images with the real world is one of the greatest challenges. In our work we use computer vision techniques and have developed some optimized algorithms for fast, accurate real-time registration and convenient optical see-through HMD calibration. In this paper, after introducing our conferencing application, we describe the video-based registration and calibration methods used.

2. System overview

Our prototype system supports collaboration between a user wearing a see-through head mounted display (HMD) and users on more traditional desktop interfaces, as shown in figure 1. This simulates the situation that could occur in collaboration between a desk-bound expert and a remote field worker. The user with the AR head mounted interface can see video images from the desktop users and be supported by them. Remote desktop users can see the video images that the small camera of the AR user grabs, and give support to the AR user. This system does not support video communication among desktop users. If necessary, however, they could simultaneously execute a traditional video communication application. In this section we first describe the AR head mounted interface and then the desktop interface.

Figure 1. System configuration: an AR user with an optical see-through head mounted display and a camera, and desktop computer users with cameras.

2.1. Augmented Reality Interface

The user with the AR interface wears a pair of the Virtual i-O iglasses HMD that have been modified by adding a small color camera. The iglasses are full color, can be used in either a see-through or an occluded mode, and have a resolution of 263x234 pixels. The camera output is connected to an SGI O2 (R5000SC 180MHz CPU) computer and the video out of the SGI is connected back into the head mounted display. The O2 is used both for image processing of the video from the head mounted camera and for virtual image generation for the HMD. Performance is 7-10 frames per second for the full version, and 10-15 fps running without the Virtual Shared Whiteboard.

The AR user also has a set of small marked cards and a larger piece of paper with six letters around its outside. There is one small marked card for each remote collaborator, with their name written on it. These are placeholders (user ID cards) for the Virtual Monitors showing the remote collaborators, while the larger piece of paper is a placeholder for the shared whiteboard. To write and interact with virtual objects on the shared whiteboard, the user has a simple light pen consisting of an LED, switch and battery mounted on a pen. When the LED touches a surface the switch is tripped and the LED is turned on. Figure 2 shows an observer's view of the AR user using the interface.

The software components of the interface consist of two parts: the Virtual Monitors shown on the user ID cards, and the Virtual Shared Whiteboard. When the system is running, computer vision techniques are used to identify specific user ID cards (using the user name on the card) and display live video of the remote user that corresponds to the ID card.
Vision techniques are also used to calculate the head position and orientation relative to the cards, so that the virtual images are precisely registered with the ID cards. Figure 3 shows an example of a Virtual Monitor; in this case the user is holding an ID card which has live video from a remote collaborator attached to it.

Figure 2. Using the Augmented Reality Interface.

Figure 3. Remote user representation in the AR interface.

Shared whiteboards are commonly used in collaborative applications to enable people to share notes and diagrams. In our application we use a Virtual Shared Whiteboard, as seen in figure 4. This is shown on a larger paper board with six registration markings similar to those on the user ID cards. Virtual annotations written by remote participants are displayed on it, exactly aligned with the plane of the physical card. The local participant can use the light pen to draw on the card and add their own annotations, which are in turn displayed and transferred to the remote desktops. The user can erase their own annotations by touching one corner of the card. Currently our application only supports virtual annotations aligned with the surface of the card, but we are working on adding support for shared 3D objects.

Figure 4. Virtual Shared Whiteboard.

The position and pose of this paper board can be estimated using the same vision methods used for the Virtual Monitors. However, since the user's hands often occlude the registration markers, the estimation has to be done using only the visible markers. We can reliably estimate the card position using only one of the six markers. The LED light pen is on while it touches the paper board. When this happens, the system estimates the position of the pen tip relative to the paper board from the 2D position of the LED in the camera image and the knowledge that the tip of the pen is in contact with the board. Users can pick up the card for a closer look at the images on the virtual whiteboard, and can position it freely within their real workspace.

2.2. Desktop Interface

The AR user collaborates with remote desktop users who have a more traditional interface. The desktop users are on networked SGI computers. Users with video cameras on their computers see a video window of the image that their camera is sending, the remote video from the AR head mounted camera, and a shared whiteboard application. The video from the AR user's head mounted camera enables the desktop users to collaborate more effectively with them on real world tasks. They can freely draw on the shared whiteboard using the mouse, and whiteboard annotations and video frames from their camera are sent to the AR user.
3. Video-based registration techniques

Our AR conferencing interface relies heavily on computer vision techniques for ID recognition and for user head position and pose determination. In the remainder of the paper we outline the underlying computer vision methods we have developed to accomplish this. These methods are general enough to be applicable to a wide range of augmented reality applications.

Augmented Reality systems using HMDs can be classified into two groups according to the display method used:

Type A: Video See-through Augmented Reality
Type B: Optical See-through Augmented Reality

In type A, virtual objects are superimposed on a live video image of the real world captured by the camera attached to the HMD. The resulting composite video image is displayed back to both eyes of the user. In this case, interaction with the real world is a little unnatural because the camera viewpoint shown in the HMD is offset from that of the user's own eyes, and the image is not stereoscopic. Performance can also be significantly affected as the video frame rate drops. However, this type of system can be realized easily, because good image registration only requires that the relationship between 2D screen coordinates on the image and 3D coordinates in the real world is known.

In type B, virtual objects are shown directly on the real world by using a see-through display. In this case, the user can see the real world directly and stereoscopic virtual images can be generated, so the interaction is very natural. However, the image registration requirements are much more challenging, because the relationships between the camera, the HMD screens and the eyes must be known in addition to the relationships used by type A systems. Calibration of the system is therefore very important for precise registration.

Azuma gives a good review of the issues faced in augmented reality registration and calibration [11]. Many registration techniques have also been proposed. State proposed a registration method using stereo images and a magnetic tracker [12]. Neumann used a single camera and multiple fiducial markers for robust tracking [13]. Rekimoto used vision techniques to identify 2D matrix markers [14]. Klinker used square markers for fast tracking [15]. Our approach is similar to this last method.

We have developed a precise registration method for an optical see-through augmented reality system. Our method overcomes two primary problems: calibration of the HMD and camera, and estimating an accurate position and pose of the fiducial markers. We first describe the position and pose estimation method, and then the HMD and camera calibration method, because our HMD calibration method is based on the fiducial marker tracking.

4. Position and pose estimation of markers

4.1. Estimation of the Transformation Matrix

Square markers of known size are used as the base of the coordinate frame in which the Virtual Monitors are represented (Figure 5). The transformation matrices from these marker coordinates to the camera coordinates (Tcm), represented in eq. 1, are estimated by image analysis.

\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
=
\begin{bmatrix}
V_{11} & V_{12} & V_{13} & W_x \\
V_{21} & V_{22} & V_{23} & W_y \\
V_{31} & V_{32} & V_{33} & W_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix}
=
\begin{bmatrix} V_{3\times 3} & W_{3\times 1} \\ 0\;0\;0 & 1 \end{bmatrix}
\begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix}
= T_{cm}
\begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix}
\quad \text{(eq. 1)}

Figure 5. The relationship between marker coordinates (Xm, Ym, Zm) and the camera coordinates (Xc, Yc, Zc) is estimated by image analysis; (xc, yc) are camera screen coordinates.
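To make the structure of eq. 1 concrete, the following NumPy sketch (our illustration, not code from the paper) assembles Tcm from the rotation V3x3 and translation W3x1 and maps marker-frame points, such as the four corners of an 80 mm marker, into camera coordinates:

```python
import numpy as np

def make_Tcm(V, W):
    """Assemble the 4x4 marker-to-camera transform of eq. 1 from the
    3x3 rotation V and the translation (Wx, Wy, Wz)."""
    T = np.eye(4)
    T[:3, :3] = V
    T[:3, 3] = W
    return T

def marker_to_camera(T_cm, points_m):
    """Map an Nx3 array of marker-frame points (Xm, Ym, Zm) into camera
    coordinates (Xc, Yc, Zc) using eq. 1."""
    homog = np.hstack([points_m, np.ones((len(points_m), 1))])
    return (T_cm @ homog.T).T[:, :3]

# Example: the four corners of an 80 mm marker, centred on the marker origin,
# viewed by a camera 500 mm straight in front of it (made-up numbers).
corners_m = 40.0 * np.array([[-1.0, 1.0, 0.0], [1.0, 1.0, 0.0],
                             [1.0, -1.0, 0.0], [-1.0, -1.0, 0.0]])
T_cm = make_Tcm(np.eye(3), [0.0, 0.0, 500.0])
print(marker_to_camera(T_cm, corners_m))
```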
After thresholding the input image, regions whose outline contour can be fitted by four line segments are extracted. The parameters of these four line segments and the coordinates of the four vertices of each region, found from the intersections of the line segments, are stored for later processing.

The regions are normalized, and the sub-image within each region is compared by template matching with patterns given to the system beforehand in order to identify specific user ID markers. User names or photos can be used as identifiable patterns. For this normalization process, eq. 2, which represents a perspective transformation, is used. All variables in the transformation matrix are determined by substituting the screen coordinates and marker coordinates of the detected marker's four vertices for (xc, yc) and (Xm, Ym) respectively. After that, the normalization can be performed using this transformation matrix.

\begin{bmatrix} h x_c \\ h y_c \\ h \end{bmatrix}
=
\begin{bmatrix}
N_{11} & N_{12} & N_{13} \\
N_{21} & N_{22} & N_{23} \\
N_{31} & N_{32} & 1
\end{bmatrix}
\begin{bmatrix} X_m \\ Y_m \\ 1 \end{bmatrix}
\quad \text{(eq. 2)}
When two parallel sides of a square marker are projected onto the image, the equations of those line segments in the camera screen coordinates are the following:

a_1 x + b_1 y + c_1 = 0, \quad a_2 x + b_2 y + c_2 = 0 \quad \text{(eq. 3)}

For each marker, the values of these parameters have already been obtained in the line-fitting process. Given the perspective projection matrix P, obtained by the camera calibration, in eq. 4, the equations of the planes that include these two sides can be represented as eq. 5 in the camera coordinate frame by substituting xc and yc in eq. 4 for x and y in eq. 3.

P =
\begin{bmatrix}
P_{11} & P_{12} & P_{13} & 0 \\
0 & P_{22} & P_{23} & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix},
\quad
\begin{bmatrix} h x_c \\ h y_c \\ h \\ 1 \end{bmatrix}
= P
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
\quad \text{(eq. 4)}

a_1 P_{11} X_c + (a_1 P_{12} + b_1 P_{22}) Y_c + (a_1 P_{13} + b_1 P_{23} + c_1) Z_c = 0
a_2 P_{11} X_c + (a_2 P_{12} + b_2 P_{22}) Y_c + (a_2 P_{13} + b_2 P_{23} + c_2) Z_c = 0
\quad \text{(eq. 5)}

Given that the normal vectors of these planes are n1 and n2 respectively, the direction vector of the two parallel sides of the square is given by the outer product n1 × n2. Let the two unit direction vectors obtained from the two sets of two parallel sides of the square be u1 and u2; these vectors should be perpendicular. However, image processing errors mean that the vectors will not be exactly perpendicular. To compensate for this, two perpendicular unit direction vectors v1 and v2 are defined in the plane that includes u1 and u2, as shown in figure 6. Given that the unit direction vector perpendicular to both v1 and v2 is v3, the rotation component V3x3 in the transformation matrix Tcm from marker coordinates to camera coordinates specified in eq. 1 is [v1ᵗ v2ᵗ v3ᵗ].

Figure 6. Two perpendicular unit direction vectors v1 and v2 are calculated from u1 and u2.
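The following sketch shows one way the rotation component could be computed from the fitted line parameters, following eq. 5 and figure 6. The symmetric construction of v1 and v2 and the column ordering are our assumptions; the paper does not spell out these details.

```python
import numpy as np

def plane_normal(line, P):
    """Normal of the plane through the camera origin and the image line
    a*x + b*y + c = 0, using the intrinsic entries of P from eq. 4 (eq. 5)."""
    a, b, c = line
    return np.array([a * P[0, 0],
                     a * P[0, 1] + b * P[1, 1],
                     a * P[0, 2] + b * P[1, 2] + c])

def rotation_from_sides(pair1, pair2, P):
    """Estimate V3x3 of eq. 1 from the two pairs of parallel marker sides.
    pair1 and pair2 are ((a, b, c), (a, b, c)) line parameters from the
    line-fitting step."""
    # Direction of each pair of parallel sides: cross product of the two
    # plane normals (n1 x n2 in the text).
    u1 = np.cross(plane_normal(pair1[0], P), plane_normal(pair1[1], P))
    u2 = np.cross(plane_normal(pair2[0], P), plane_normal(pair2[1], P))
    u1 /= np.linalg.norm(u1)
    u2 /= np.linalg.norm(u2)
    # u1 and u2 should be perpendicular but generally are not; define v1 and
    # v2 symmetrically in the plane spanned by u1 and u2 (figure 6).
    p = (u1 + u2) / np.linalg.norm(u1 + u2)
    q = (u1 - u2) / np.linalg.norm(u1 - u2)
    v1 = (p + q) / np.linalg.norm(p + q)
    v2 = (p - q) / np.linalg.norm(p - q)
    v3 = np.cross(v1, v2)
    return np.column_stack([v1, v2, v3])
```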
Since the rotation component V3x3 of the transformation matrix is now given, using eq. 1 and eq. 4 together with the four vertex coordinates of the marker in the marker coordinate frame and the corresponding coordinates in the camera screen coordinate frame, eight equations in the translation components Wx, Wy, Wz are generated, and the values of these translation components can be obtained from them.

The transformation matrix found by the method mentioned above may include error. However, this can be reduced through the following process. The vertex coordinates of the marker in the marker coordinate frame are transformed into the camera screen coordinate frame using the transformation matrix obtained. The transformation matrix is then optimized so that the sum of the differences between these transformed coordinates and the coordinates measured from the image is minimized. Although there are six independent variables in the transformation matrix, only the rotation components are optimized, and the translation components are then re-estimated using the method mentioned above. By iterating this process a number of times, the transformation matrix is found more accurately. It would be possible to optimize over all six independent variables, but the computational cost has to be considered.
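A sketch of this refinement loop follows: the translation is re-estimated linearly from the eight equations, and only a small rotation correction is optimized against the reprojection error. The paper does not name its optimizer, so a Nelder-Mead search stands in here, and all function names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def rodrigues(r):
    """Rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def solve_translation(R, K, corners_m, corners_img):
    """Linear least-squares estimate of (Wx, Wy, Wz) given the rotation R,
    the 3x3 intrinsic part K of eq. 4, the marker-frame corners (Xm, Ym, 0)
    and their observed image positions (the eight equations in the text)."""
    A, b = [], []
    for Xm, (x, y) in zip(corners_m, corners_img):
        Xr = R @ Xm
        for Krow, obs in ((K[0], x), (K[1], y)):
            row = Krow - obs * K[2]
            A.append(row)
            b.append(-row @ Xr)
    W, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return W

def reprojection_error(R, W, K, corners_m, corners_img):
    """Sum of squared differences between reprojected and measured corners."""
    err = 0.0
    for Xm, (x, y) in zip(corners_m, corners_img):
        u = K @ (R @ Xm + W)
        err += (u[0] / u[2] - x) ** 2 + (u[1] / u[2] - y) ** 2
    return err

def refine_pose(R0, K, corners_m, corners_img):
    """Optimize a small rotation correction to R0 (3 parameters) while
    re-solving the translation linearly at every step."""
    def cost(rvec):
        R = rodrigues(rvec) @ R0
        W = solve_translation(R, K, corners_m, corners_img)
        return reprojection_error(R, W, K, corners_m, corners_img)
    res = minimize(cost, np.zeros(3), method="Nelder-Mead")
    R = rodrigues(res.x) @ R0
    return R, solve_translation(R, K, corners_m, corners_img)
```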
4.2. An Extension for the Virtual Shared Whiteboard

The method described for tracking user ID cards is extended for tracking the shared whiteboard card. There are six markers on the Virtual Shared Whiteboard, aligned around the outside of the board as shown in figure 7. The orientation of the whiteboard is found by fitting lines around the fiducial markers and using an extension of the technique described for tracking user ID cards.

Using all six markers to find the board orientation and align virtual images in the interior produces very good registration results. However, when a user draws a virtual annotation, some markers may be occluded by the user's hands, or the user may move their head so that only a subset of the markers is in view. The transformation matrix for the Virtual Shared Whiteboard then has to be estimated from the visible markers, so errors are introduced when fewer markers are available. To reduce these errors, the line-fitting equations are found by considering both individual markers and sets of aligned markers. Each marker has a unique letter in its interior that enables the system to identify which markers should be horizontally or vertically aligned, and so to estimate the board rotation. Although line equations in the camera screen coordinate frame are generated independently for each marker, the alignment of the six markers on the Virtual Shared Whiteboard means that some line equations are identical. Therefore, by extracting all aligned sides from the visible markers for the line fitting, each line equation is calculated using all the contour information lying on the extracted sides, as sketched below. Furthermore, by using all the equations of the detected parallel lines, the direction vectors are estimated and the orientation is found.

Figure 7. Layout of markers on the Shared Whiteboard (whiteboard coordinate frame (Xw, Yw, Zw)).
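One way the merged line fitting could look is sketched below. The grouping of marker sides into collinear sets comes from the known board layout; the identifiers and data structures here are invented for illustration.

```python
import numpy as np

def fit_line(points):
    """Total least-squares fit of a line a*x + b*y + c = 0 to an Nx2 array
    of contour points (the merged contour points of sides that the board
    layout says are collinear)."""
    mean = points.mean(axis=0)
    # The line direction is the principal component of the centred points;
    # the unit normal (a, b) is the remaining singular vector.
    _, _, vt = np.linalg.svd(points - mean)
    a, b = vt[-1]
    c = -(a * mean[0] + b * mean[1])
    return a, b, c

def merged_board_lines(side_contours, groups):
    """side_contours: dict side_id -> Nx2 contour points from visible markers.
    groups: lists of side_ids that the board layout says lie on one line."""
    return [fit_line(np.vstack([side_contours[s] for s in group
                                if s in side_contours]))
            for group in groups
            if any(s in side_contours for s in group)]
```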
4.3. Pen Detection

The light pen is on while it touches the shared whiteboard. The pen tip location is estimated in the following way. First, the brightest region in the image is extracted and its center of gravity is detected. If the brightness and area of the region do not satisfy heuristic rules, the light pen is regarded as being turned off.

Since the pen position (Xw, Yw, Zw) is expressed relative to the Virtual Shared Whiteboard, it is detected in the whiteboard coordinate frame. The relationship between the camera screen coordinates and the whiteboard coordinates is given by eq. 6, where (xc, yc) is the position of the center of gravity detected by image processing. Also, Zw is equal to zero since the pen is on the board. Using these values in eq. 6, two equations with Xw and Yw as variables are generated, and their values are easily calculated by solving these equations.

\begin{bmatrix} h x_c \\ h y_c \\ h \\ 1 \end{bmatrix}
= P
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
= P
\begin{bmatrix}
V_{11} & V_{12} & V_{13} & W_x \\
V_{21} & V_{22} & V_{23} & W_y \\
V_{31} & V_{32} & V_{33} & W_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\quad \text{(eq. 6)}
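Setting Zw = 0 in eq. 6 leaves two linear equations in Xw and Yw, which can be solved directly. A minimal sketch in our own notation, where K is the intrinsic 3x3 part of P and T_cw is the whiteboard-to-camera transform of eq. 6:

```python
import numpy as np

def pen_tip_on_board(x, y, K, T_cw):
    """Recover (Xw, Yw) of the LED tip from its image centroid (x, y), the
    3x3 intrinsic part K of eq. 4 and the whiteboard-to-camera transform
    T_cw of eq. 6, using the constraint Zw = 0."""
    M = K @ T_cw[:3, :]               # 3x4: whiteboard -> image, up to scale
    A = np.array([[M[0, 0] - x * M[2, 0], M[0, 1] - x * M[2, 1]],
                  [M[1, 0] - y * M[2, 0], M[1, 1] - y * M[2, 1]]])
    b = np.array([x * M[2, 3] - M[0, 3],
                  y * M[2, 3] - M[1, 3]])
    Xw, Yw = np.linalg.solve(A, b)
    return Xw, Yw
```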
5. HMD and Camera Calibration

In an optical see-through HMD, a ray from a physical object reaches the focal point of the eye through the HMD screen. A 3D position represented in the eye coordinates, whose origin is the focal point of the eye, can then be projected onto the HMD screen coordinates by the perspective projection model. This assumes that the Z axis perpendicularly crosses the HMD screen, and that the X and Y axes are parallel to the X and Y axes of the HMD screen coordinate frame respectively. Figure 8 shows the coordinate frames used in our calibration procedure.
As mentioned in section 4, position and pose estimation of a marker is done by calculating the transformation matrix from marker coordinates to camera coordinates, Tcm (eq. 1). The perspective projection matrix P (eq. 4) is required in this procedure. Camera calibration is the process of finding the perspective projection matrix P that represents the relationship between the camera coordinates and the camera screen coordinates.

Figure 8. Coordinate frames in our calibration procedure: HMD screen coordinates (xs, ys), eye coordinates (Xe, Ye, Ze), camera screen coordinates (xc, yc), camera coordinates (Xc, Yc, Zc) and marker coordinates (Xm, Ym, Zm).

In order to display virtual objects on the HMD screen as if they were on the marker, the relationship between the marker coordinates and the HMD screen coordinates is required. The relationship between HMD screen coordinates and eye coordinates is represented by a perspective projection. Also, the relationship between camera coordinates and eye coordinates is represented by a rotation and translation transformation. Eq. 7 shows these relationships:

\begin{bmatrix} i x_s \\ i y_s \\ i \\ 1 \end{bmatrix}
= Q_{se}
\begin{bmatrix} X_e \\ Y_e \\ Z_e \\ 1 \end{bmatrix}
= Q_{se} T_{ec}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
= Q_{se} T_{ec} T_{cm}
\begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix}
\quad \text{(eq. 7)}

where Qse is a perspective transformation matrix and Tec is a rotation and translation matrix. The matrix Tcm, representing the transformation from marker coordinates to camera coordinates, is obtained during use of the system as described in Section 4. HMD calibration is therefore the problem of finding the matrix QseTec for each eye.

5.1. Camera Calibration - Finding the matrix P

We use a simple cardboard frame with a ruled grid of lines for the camera calibration. The coordinates of all cross points of the grid are known in the cardboard's local 3D coordinates. Their positions in the camera screen coordinates can be detected by image processing after an image of the cardboard is grabbed. Many pairs of cardboard local 3D coordinates (Xt, Yt, Zt) and camera screen coordinates (xc, yc) are used for finding the perspective transformation matrix P.

The relationships among the camera screen coordinates (xc, yc), the camera coordinates (Xc, Yc, Zc) and the cardboard coordinates (Xt, Yt, Zt) can be represented as:

\begin{bmatrix} h x_c \\ h y_c \\ h \\ 1 \end{bmatrix}
= P
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
= P \cdot T_{ct}
\begin{bmatrix} X_t \\ Y_t \\ Z_t \\ 1 \end{bmatrix}
= C
\begin{bmatrix} X_t \\ Y_t \\ Z_t \\ 1 \end{bmatrix}
=
\begin{bmatrix}
C_{11} & C_{12} & C_{13} & C_{14} \\
C_{21} & C_{22} & C_{23} & C_{24} \\
C_{31} & C_{32} & C_{33} & 1 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X_t \\ Y_t \\ Z_t \\ 1 \end{bmatrix}

P =
\begin{bmatrix}
s_x f & 0 & x_0 & 0 \\
0 & s_y f & y_0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix},
\quad
T_{ct} =
\begin{bmatrix}
R_{11} & R_{12} & R_{13} & T_x \\
R_{21} & R_{22} & R_{23} & T_y \\
R_{31} & R_{32} & R_{33} & T_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\quad \text{(eq. 8)}

where P is the perspective transformation matrix to be found, f is the focal length, sx is the scale factor [pixel/mm] in the direction of the x axis, sy is the scale factor in the direction of the y axis, (x0, y0) is the point through which the Z axis of the eye coordinate frame passes, Tct represents the translation and rotation transformation from the cardboard coordinates to the camera coordinates, and C is the transformation matrix obtained by combining P and Tct.

Since many pairs of (xc, yc) and (Xt, Yt, Zt) have been obtained by the procedure mentioned above, the matrix C can be estimated. However, the matrix C cannot in general be decomposed into P and Tct, because matrix C has 11 independent variables while matrices P and Tct have 4 and 6 respectively, so the sum of the independent variables of P and Tct is not equal to that of C. A scalar variable k is added into P to make these numbers equal, as follows:

\begin{bmatrix} h x_c \\ h y_c \\ h \\ 1 \end{bmatrix}
=
\begin{bmatrix}
s_x f & k & x_0 & 0 \\
0 & s_y f & y_0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
R_{11} & R_{12} & R_{13} & T_x \\
R_{21} & R_{22} & R_{23} & T_y \\
R_{31} & R_{32} & R_{33} & T_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X_t \\ Y_t \\ Z_t \\ 1 \end{bmatrix}
\quad \text{(eq. 9)}
As a result, the matrix C can be decomposed into P and Tct. The variable k represents the slant between the x-axis and the y-axis; ideally it should be zero, but it may take a small non-zero value because of noise.
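A hedged sketch of how C in eq. 8 can be estimated by linear least squares from the grid correspondences (a standard DLT-style formulation; the function name and the choice of fixing C34 = 1 are ours):

```python
import numpy as np

def estimate_C(world_pts, image_pts):
    """Least-squares estimate of the 3x4 matrix C of eq. 8 (C34 fixed to 1)
    from pairs of cardboard grid points (Xt, Yt, Zt) and their detected
    image positions (xc, yc). At least six well-spread points are needed."""
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z]); b.append(x)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z]); b.append(y)
    c, *_ = np.linalg.lstsq(np.array(A, dtype=float),
                            np.array(b, dtype=float), rcond=None)
    return np.vstack([c[:4], c[4:8], np.append(c[8:], 1.0)])
```

One standard way to perform the decomposition then follows from the structure of eq. 9: rescaling C so that (C31, C32, C33) has unit norm recovers the third row of the rotation and Tz, and the remaining intrinsic and extrinsic parameters follow row by row using the orthonormality of the rotation rows.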
5.2. HMD Calibration - Finding the matrix QseTec

The formulation of the matrix QseTec is the same as that of the matrix C in eq. 8. Therefore many pairs of the coordinates (xs, ys) and (Xc, Yc, Zc) can be used for finding the transformation matrix combining Qse and Tec. In order to obtain such data, we use the marker tracking technique introduced in section 4.

The HMD calibration procedure is done for each eye. A cross-hair cursor is displayed on the corresponding HMD screen. The user holds a fiducial marker and aligns its center with the cross-hair cursor, as shown in figure 9. The fiducial marker is simultaneously observed by the camera attached to the HMD and its central coordinates are detected in the camera coordinates. While the user moves the marker from the near side to the far side, several marker positions are stored by clicking a mouse button. In this procedure, the positions of the cross-hair cursor give the HMD screen coordinates (xs, ys) and the marker positions give the camera coordinates (Xc, Yc, Zc). After repeating this operation for several positions of the cross-hair cursor, many pairs of (xs, ys) and (Xc, Yc, Zc) are obtained. Finally, the transformation matrix combining Qse and Tec is found.
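Since QseTec has the same form as C, the same least-squares estimation can be applied directly to the collected (xs, ys) and (Xc, Yc, Zc) pairs, once per eye. A brief sketch, assuming the hypothetical estimate_C helper from the camera-calibration sketch above is available:

```python
import numpy as np

def calibrate_eye(screen_pts, camera_pts):
    """Estimate the combined matrix Qse*Tec of eq. 7 for one eye from the
    collected (xs, ys) / (Xc, Yc, Zc) pairs. estimate_C is the DLT helper
    sketched in section 5.1; at least six well-spread pairs are needed."""
    return estimate_C(camera_pts, screen_pts)

def project_to_hmd(QT, point_c):
    """Project a camera-frame point onto the HMD screen using eq. 7."""
    u = QT @ np.append(np.asarray(point_c, dtype=float), 1.0)
    return u[0] / u[2], u[1] / u[2]
```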
Some calibration methods for optical see-through HMDs have been proposed. However, most of them require that the user holds their head still during the calibration [15]. This constraint is one cause of the difficulty of HMD calibration. Our calibration method obviously does not need this kind of constraint, so it can be used conveniently.

Figure 9. HMD calibration.

6. Evaluation of registration and calibration

6.1. Accuracy of the marker detection

In order to evaluate the accuracy of the marker detection, the detected position and pose were recorded while a square marker with a side length of 80 mm was moved in the depth direction at several slants. Figure 10 shows the position errors. Figure 11 shows the detected slant. These results show that accuracy decreases the further the cards are from the camera.

Figure 10. Errors of position: position error [mm] versus distance [mm] for marker slants of 0, 30, 45, 60 and 85 degrees.

Figure 11. Detected slant: detected slant [deg] versus distance [mm] for marker slants of 0, 30, 45, 60 and 85 degrees.
6.2. Evaluation of HMD calibration

Our HMD calibration method was evaluated using a program that displays a square of the same size as the marker on top of it. A user wearing the HMD looks at the displayed square on the marker and reports the deviation of the displayed square from the marker. This evaluation was done for 3 tasks:

Task 1: holding the marker (eye-marker distance: 300 mm).
Task 2: putting the marker on a desk (eye-marker distance: 400 mm).
Task 3: putting the marker far away on a desk (eye-marker distance: 800 mm).

Also we had 3 conditions:

(a) Evaluation with standard parameters.
(b) Evaluation with calibrated parameters.
(c) Evaluation with calibrated parameters, but the user took off the HMD once after calibration.

Standard parameters are parameters which had been calibrated by another user. Ten cross-hair cursor fittings were done for each eye. Table 1 shows the results of this user testing. These results seem good. However, they include a problem: the focal point of the HMD is at a distance of 2-3 m, but the user has to look at a virtual object at a distance of 300-800 mm. The user therefore sees the virtual object out of focus, which makes reporting a precise deviation very difficult. As a result, test users might have reported generous answers. Nevertheless, we can see the improvement of the registration obtained by using the calibrated parameters.

Table 1. Results of user testing.

user  time (min)  condition  task 1 (mm)  task 2 (mm)  task 3 (mm)
A     3           (a)        20           20           20
                  (b)        0            0            5
                  (c)        3            3            6
B     2           (a)        20           30           35
                  (b)        0            5            5
                  (c)        0            5            5
C     2           (a)        2            2            2
                  (b)        2            2            2
                  (c)        0            2            2
D     2           (a)        5            5            10
                  (b)        0            0            0
                  (c)        0            1            2
E     2           (a)        10           10           10
                  (b)        0            0            0
                  (c)        1            2            3

7. Conclusions

In this paper we have described a new Augmented Reality conferencing application and the computer vision techniques used in the application. Our computer vision methods give good results when the markers are close to the user, but accuracy decreases the further the cards are from the camera. Also, our HMD calibration method, which does not require the user to keep their head still, gives good results without taxing the user's patience. In future work we will improve this AR conferencing prototype and carry out user testing to evaluate it as a communication system.

References

[1] A. Wexelblat, "The Reality of Cooperation: Virtual Reality and CSCW", Virtual Reality: Applications and Explorations, edited by A. Wexelblat, Boston, Academic Publishers, 1993.
[2] C. Carlson and O. Hagsand, "DIVE - A Platform for Multi-User Virtual Environments", Computers and Graphics, Nov/Dec 1993, Vol. 17(6), pp. 663-669.
[3] J. Mandeville, J. Davidson, D. Campbell, A. Dahl, P. Schwartz, and T. Furness, "A Shared Virtual Environment for Architectural Design Review", CVE '96 Workshop Proceedings, 19-20th September 1996, Nottingham, Great Britain.
[4] J. Grudin, "Why CSCW applications fail: Problems in the design and evaluation of organizational interfaces", Proceedings of CSCW '88, Portland, Oregon, 1988, New York: ACM Press, pp. 85-93.
[5] H. Ishii, M. Kobayashi, K. Arita, "Iterative Design of Seamless Collaboration Media", Communications of the ACM, Vol. 37, No. 8, August 1994, pp. 83-97.
[6] D. Schmalstieg, A. Fuhrmann, Z. Szalavari, M. Gervautz, "Studierstube - An Environment for Collaboration in Augmented Reality", CVE '96 Workshop Proceedings, 19-20th September 1996, Nottingham, Great Britain.
[7] J. Rekimoto, "Transvision: A Hand-held Augmented Reality System for Collaborative Design", Proceedings of Virtual Systems and Multimedia '96 (VSMM '96), Gifu, Japan, 18-20 Sept., 1996.
[8] T. Ohshima, K. Sato, H. Yamamoto, H. Tamura, "AR2 Hockey: A case study of collaborative augmented reality", Proceedings of VRAIS '98, pp. 268-295, 1998.
[9] M. Billinghurst, S. Weghorst, T. Furness, "Shared Space: An Augmented Reality Approach for Computer Supported Cooperative Work", Virtual Reality, Vol. 3(1), 1998, pp. 25-36.
[10] A. Sellen, "Speech Patterns in Video-Mediated Conversations", Proceedings of CHI '92, May 3-7, 1992, New York: ACM Press, pp. 49-59.
[11] R. Azuma, "SIGGRAPH 95 Course Notes: A Survey of Augmented Reality", Los Angeles, Association for Computing Machinery, 1995.
[12] A. State, G. Hirota, D. T. Chen, W. F. Garrett, M. A. Livingston, "Superior Augmented Reality Registration by Integrating Landmark Tracking and Magnetic Tracking", Proceedings of SIGGRAPH 96, pp. 429-446, 1996.
[13] U. Neumann, S. You, Y. Cho, J. Lee, J. Park, "Augmented Reality Tracking in Natural Environments", Mixed Reality - Merging Real and Virtual Worlds (edited by Y. Ohta and H. Tamura), Ohmsha and Springer-Verlag, pp. 101-130, 1999.
[14] J. Rekimoto, "Matrix: A Realtime Object Identification and Registration Method for Augmented Reality", Proceedings of Asia Pacific Computer Human Interaction 1998 (APCHI '98), Japan, Jul. 15-17, 1998.
[15] G. Klinker, D. Stricker, D. Reiners, "Augmented Reality: A Balancing Act Between High Quality and Real-Time Constraints", Proceedings of ISMR '99, 1999, pp. 325-346.
