
Computers & Graphics 25 (2001) 745–753

The MagicBook: a transitional AR interface


Mark Billinghurst a,*, Hirokazu Kato b, Ivan Poupyrev c

a Human Interface Technology Laboratory, University of Washington, Box 352-142, Fluke Hall, Mason Road, Seattle, WA 98195, USA
b Faculty of Information Sciences, Hiroshima City University, 3-4-1 Ozuka-Higashi, Asaminami-ku, Hiroshima 731-3194, Japan
c Interaction Laboratory, Sony CSL, 3-14-13 Higashi-Gotanda, Shinagawa-ku, Tokyo 141-0022, Japan

Abstract

The MagicBook is a Mixed Reality interface that uses a real book to seamlessly transport users between Reality and
Virtuality. A vision-based tracking method is used to overlay virtual models on real book pages, creating an Augmented
Reality (AR) scene. When users see an AR scene they are interested in they can fly inside it and experience it as an
immersive Virtual Reality (VR). The interface also supports multi-scale collaboration, allowing multiple users to
experience the same virtual environment either from an egocentric or an exocentric perspective. In this paper we
describe the MagicBook prototype, potential applications and user feedback. © 2001 Elsevier Science Ltd. All rights reserved.

Keywords: Augmented reality; CSCW; Mixed reality; Collaborative virtual environments

1. Introduction

As computers become smaller and more powerful, researchers have been trying to produce a technology transparency that significantly enhances human–computer interaction. The goal is to make interacting with a computer as easy as interacting with the real world. There are several approaches for achieving this. In the field of Tangible User Interfaces [1], real world objects are used as interface widgets and the computer disappears into the physical workspace. In an immersive Virtual Reality (VR) environment, the real world is replaced entirely by computer-generated imagery and the user is enveloped in the virtual space. Finally, Augmented Reality (AR) blends elements of the real and virtual by superimposing virtual images on the real world.

As Milgram points out [2], these types of computer interfaces can be placed along a continuum according to how much of the user's environment is computer generated (Fig. 1). On this Reality–Virtuality line, Tangible User Interfaces lie far to the left, while immersive virtual environments are placed at the rightmost extreme. In between are Augmented Reality interfaces, where virtual imagery is added to the real world, and Augmented Virtuality interfaces, where real world content is brought into immersive virtual scenes. Most current user interfaces can be placed at specific points along this line.

In addition to single user applications, many computer interfaces have been developed that explore collaboration in a purely physical setting, in an AR setting, or in an immersive virtual world. For example, Wellner's DigitalDesk [3] and Brave's work on the InTouch and PSyBench [4] interfaces show how physical objects can enhance both face-to-face and remote collaboration. In this case, the real objects provide a common semantic representation as well as a tangible interface for the digital information space. Work on the DIVE project [5], GreenSpace [6] and other fully immersive multi-participant virtual environments has shown that collaborative work is also intuitive in completely virtual surroundings. Users can freely move through the space, setting their own viewpoints and spatial relationships, while gesture, voice and graphical information can all be communicated seamlessly between the participants.

*Tel.: +1-206-543-5075; fax: +1-206-543-5380.
E-mail addresses: [email protected] (M. Billinghurst), [email protected] (H. Kato), [email protected] (I. Poupyrev).

0097-8493/01/$ - see front matter © 2001 Elsevier Science Ltd. All rights reserved.
PII: S0097-8493(01)00117-0
Finally, collaborative AR projects such as Studierstube [7] and AR2Hockey [8] allow multiple users to work in both the real and virtual world simultaneously, facilitating computer supported collaborative work (CSCW) in a seamless manner. AR interfaces are very conducive to real world collaboration because the groupware support can be kept simple and left mostly to social protocols.

Fig. 1. Milgram's Reality–Virtuality continuum.

Benford [9] classifies these collaborative interfaces along two dimensions of Artificiality and Transportation. Transportation is the degree to which users leave their local space and enter into a remote space, and Artificiality the degree to which a space is synthetic or removed from the physical world. Fig. 2 shows the classification of typical collaborative interfaces. As can be seen, Milgram's continuum can be viewed as the equivalent of Benford's Artificiality dimension. Again, most collaborative interfaces exist at a discrete location in this two-dimensional taxonomy.

Fig. 2. Benford's classification of collaborative interfaces.

However, human activity often cannot be broken into discrete components, and for many tasks users may prefer to be able to easily switch between interface types, or between co-located and remote collaboration. This is particularly true when viewing and interacting with three-dimensional (3D) graphical content. For example, even when using a traditional desktop modeling interface users will turn aside from the computer screen to sketch with pencil and paper. As Kiyokawa et al. point out, AR and immersive VR are complementary and the type of interface should be chosen according to the nature of the task [10,11]. For example, if collaborators want to experience a virtual environment from different viewpoints or scales then immersive VR may be the best choice. However, if the collaborators want to have a face-to-face discussion while viewing the virtual image, an AR interface may be best. Similarly, in a collaborative session users may often want to switch between talking with their remote collaborators and the people sitting next to them in the same location. Given that different degrees of immersion may be useful for different tasks and types of collaboration, an important question is how to support seamless transitions between the classification spaces.

Several researchers have conducted work in this area. Kiyokawa et al. [11,12] explored the seamless transition between an AR and immersive VR experience. They developed a two-person shared AR interface for face-to-face computer-aided design, but users could also change their body scale and experience the virtual world immersively. Once users began to decrease or increase their body size the interface would transition them into an immersive environment. This ability of users to fly into miniature virtual worlds and experience them immersively was previously explored by Stoakley et al. in the Worlds in Miniature (WIM) work [13]. They used the miniature worlds to help users navigate and interact with immersive virtual environments at full-scale. The WIM interface explored the use of multiple perspectives in a single user VR interface, while the CALVIN work of Leigh et al. [14] introduced multiple perspectives in a collaborative VR environment. In CALVIN, users could either be Mortals or Deities and view the VR world from either an egocentric or an exocentric view, respectively. CALVIN supported multi-scale collaboration between participants so that deities would appear like giants to mortals and vice versa.

The MagicBook interface builds on this earlier work and explores how a physical object can be used to smoothly transport users between Reality and Virtuality, or between co-located and remote collaboration. It supports transitions along the entire Reality–Virtuality continuum, not just within the medium of immersive VR, and so cannot be placed as a discrete point on a taxonomy scale. In the remainder of this article we describe the MagicBook interface in more detail, the technology involved, initial user reaction and potential applications of the technology.

2. The MagicBook experience

The MagicBook experience uses normal books as the main interface object. People can turn the pages of these books, look at the pictures, and read the text without any additional technology (Fig. 3a). However, if they look at the book through an AR display they see 3D virtual models appearing out of the pages (Fig. 3b).
Fig. 3. Using the MagicBook to move between Reality and Virtual Reality.

The models appear attached to the real page, so users can see the AR scene from any perspective simply by moving themselves or the book. The models can be of any size and are also animated, so the AR view is an enhanced version of a traditional 3D "pop-up" book. Users can change the virtual models simply by turning the book pages, and when they see a scene they particularly like, they can fly into the page and experience it as an immersive virtual environment (Fig. 3c). In the VR view they are free to move about the scene at will and interact with the characters in the story. Thus, users can experience the full Reality–Virtuality continuum.

As can be seen, the MagicBook interface has a number of important features:

1. The MagicBook removes the discontinuity that has traditionally existed between the real and virtual worlds. VR is a very intuitive environment for viewing and interacting with computer graphics content, but in a head mounted display (HMD) a person is separated from the real world and their usual tools, or collaborators.

2. The MagicBook allows users to view graphical content from both egocentric and exocentric views, so they can select the viewpoint appropriate for the task at hand. For example, an AR viewpoint (exocentric view) may be perfect for viewing and talking about a model, but immersive VR (egocentric view) is better for experiencing the model at different scales or from different viewpoints.

3. The computer has become invisible and the user can interact with graphical content as easily as reading a book. This is because the MagicBook interface metaphors are consistent with the form of the physical objects used. Turning a book page to change virtual scenes is as natural as rotating the page to see a different side of the virtual models. Holding up the AR display to the face to see an enhanced view is similar to using reading glasses or a magnifying lens. Rather than using a mouse and keyboard based interface, users manipulate virtual models using real physical objects and natural motions. Although the graphical content is not real, it looks and behaves like a real object, increasing ease of use.

2.1. Collaboration with the MagicBook

Physical objects, AR interfaces and immersive VR experiences have different advantages and disadvantages for supporting collaboration. As shown by Benford's classification, there has been a proliferation of collaborative interfaces, but it has traditionally been difficult to move between the shared spaces they create. For example, users in an immersive virtual environment are separated from the physical world and cannot collaborate with users in the real environment. The MagicBook supports all these types of interfaces and lets the user move smoothly between them depending on the task at hand.

Real objects often serve as the focus for face-to-face collaboration, and in a similar way the MagicBook interface can be used by multiple people at once. Several readers can look at the same book and share the story together (Fig. 4a). If these people then pick up their AR displays they will each see the virtual models superimposed over the book pages from their own viewpoint. Since they can see each other and the real world at the same time as the virtual models, they can easily communicate using normal face-to-face communication cues. All the users of the MagicBook interface have their own independent view of the content, so any number of people can view and interact with a virtual model as easily as they could with a real object (Fig. 4b).
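The sharing model described above, one scene with many independently tracked viewpoints, can be sketched in a few lines. All names here are illustrative assumptions, not taken from the MagicBook implementation (which used ARToolKit and VRML):

```python
import math

# Sketch of one shared scene viewed from per-user camera poses.
# Model name and coordinate conventions are assumptions for illustration.
shared_scene = {"model": "humpty_dumpty.wrl", "animation_time": 0.0}

def view_direction(position, target=(0.0, 0.0, 0.0)):
    """Unit vector from a user's tracked position toward the book page."""
    d = [t - p for p, t in zip(position, target)]
    norm = math.sqrt(sum(c * c for c in d))
    return tuple(c / norm for c in d)

users = {
    "alice": (0.3, 0.4, 0.5),   # positions tracked relative to the page
    "bob":   (-0.2, 0.5, 0.4),
}

# Each user renders the *same* scene along their *own* view direction,
# so moving the display (or the book) changes only that user's view.
for name, position in users.items():
    print(name, "views", shared_scene["model"], "along", view_direction(position))
```

The key design point is that only the camera pose is per-user; the content itself is common, which is what lets any number of readers gather around one book.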
Fig. 4. (a) Collaboration in the real world; (b) sharing an AR view.

In this way the MagicBook technology moves virtual content from the screen into the real world, preserving the cues used in normal face-to-face conversation, and providing a more intuitive technology for collaboratively viewing 3D virtual content.

Multiple users can also be immersed in the virtual scene, where they will see each other represented as virtual characters in the story (Fig. 5a). More interestingly, there may be situations where one or more users are immersed in the virtual world, while others are viewing the content as an AR scene. In this case the AR user will see an exocentric view of a miniature figure of the immersed user, moving as they move themselves about the immersive world (Fig. 5b). Naturally, in the immersive world, users viewing the AR scene appear as large virtual heads looking down from the sky. When users in the real world move, their virtual avatars move accordingly. In this way people are always aware of where the other users of the interface are located and where their attention is focused.

Fig. 5. Collaboration in the MagicBook.

Thus the MagicBook interface supports collaboration on three levels:

* As a Physical Object: similar to using a normal book, multiple users can read together.
* As an AR Object: users with AR displays can see virtual objects appearing on the pages of the book.
* As an Immersive Virtual Space: users can fly into the virtual space together and see each other represented as virtual avatars in the story space.

The interface also supports collaboration on multiple scales. Users can fly inside the virtual scenes (an egocentric view) and see each other as virtual characters. A non-immersed user will also see the immersed users as small virtual characters on the book pages (an exocentric view). This means that a group of collaborators can share both egocentric and exocentric views of the same game or data set, leading to enhanced understanding.

3. The MagicBook interface

The MagicBook interface has three main components: a hand held AR display (HHD), a computer, and one or more physical books. The books look like any normal book and have no embedded technology, while the display is designed to be easily held in one hand and to be as unencumbering as possible (Fig. 6).

Each user has their own hand held display and computer to generate an individual view of the scenes. These computers are networked together for exchanging information about avatar positions and the virtual scene each user is viewing.
Fig. 6. Components of the MagicBook interface.

The HHD is a handle with a Sony Glasstron PLM-A35 display mounted at the top, an InterSense InterTrax [15] inertial tracker at the bottom, a small color video camera on the front, and a switch and pressure pad embedded in the handle. The PLM-A35 is a low cost biocular display with two LCD panels of 260 × 230 pixel resolution.

The camera output is connected to the computer graphics workstation; computer graphics are overlaid on video of the real world and the resultant composite image is shown back in the Glasstron display. In this way users experience the real world as a video-mediated reality. One advantage of this is that the video frames seen in the display are exactly the same frames as those drawn on by the graphics software. This means that the registration between the real and virtual objects appears almost perfect, because there is no apparent lag in the system. The video of the real world is actually delayed until the system has completed rendering the 3D graphics. On a mid range PC (866 MHz Pentium III) with a virtual scene of less than 10,000 polygons we can maintain a refresh rate of 30 frames per second. This is fast enough that users perceive very little delay in the video of the real world, and the virtual objects appear stuck to the real book pages.

Although commercially available hardware was used, the "opera glass" form factor of the hand held display was deliberately designed to encourage seamless transition between Reality and Virtual Reality. Users can look through the display to see AR and VR content, but can instantaneously return to viewing the real world simply by moving the display from in front of their eyes. The hand held display is far less obtrusive and easier to remove than any head worn display, encouraging people to freely transition along the Reality–Virtuality continuum. It is also easy to share, enabling several people to try a single display unit and see the same content.

The books used in the MagicBook interface are normal books with text and pictures on each page. Certain pictures have thick black borders surrounding them and are used as tracking marks for a computer vision based head tracking system. When the reader looks at these pictures through the HHD, computer vision techniques are used to precisely calculate the camera position and orientation relative to the tracking mark. The head tracking uses the ARToolKit tracking library, a freely available open-source software package which we have written for developing vision based AR applications [16]. Fig. 7 summarizes how the ARToolKit tracking library works. Once the user's head position is known, the workstation generates virtual images that appear precisely registered with the real pages. Our use of 2D markers for AR tracking is similar to the CyberCode work presented by Rekimoto [17] and other vision based tracking systems.

When users see an AR scene they wish to explore, flicking the switch on the handle will fly them smoothly into the scene, transitioning them into the immersive VR environment. In the VR scene, users can no longer see the real world, so the head tracking is changed from the computer vision module to the InterTrax inertial orientation tracker. The output from the InterTrax inertial compass is used to set the head orientation in the virtual scene. The InterTrax provides three degrees of freedom orientation information with high accuracy and very little latency. Readers can look around the scene in any direction, and by pushing the pressure pad on the handle they can fly in the direction they are looking. The harder they push, the faster they fly. To return to the real world users simply need to flick the switch again. The pressure pad and switch are both connected to a TNG interface box [18] that converts their output to a single RS-232 serial data signal.

The MagicBook application is also a client-server networked application. Each of the user computers is networked together for exchanging information about avatar positions and the virtual scene that each user is viewing.
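The transition behaviour described above, flicking the switch to move between AR and VR while head tracking is handed from the vision module to the inertial tracker, and with flight speed driven by the pressure pad, can be summarized as a small state machine. This is an illustrative sketch only; the mode names and the speed gain are assumptions, not values from the actual system:

```python
import math

# Minimal sketch of the transition logic: the switch toggles between AR
# (vision-tracked) and immersive VR (inertially tracked) modes, and in VR
# the pressure pad drives flying speed along the gaze direction.
class MagicBookState:
    def __init__(self):
        self.mode = "AR"                  # "AR" or "VR"
        self.position = [0.0, 0.0, 0.0]   # position in the virtual scene

    def flick_switch(self):
        # The switch also changes which tracker drives head pose:
        # marker-based vision tracking in AR, inertial tracking in VR.
        self.mode = "VR" if self.mode == "AR" else "AR"

    def update(self, pressure, yaw_deg, pitch_deg, dt, gain=2.0):
        """Fly along the gaze direction; harder pressure means faster flight."""
        if self.mode != "VR" or pressure <= 0.0:
            return
        speed = gain * pressure           # speed proportional to pad pressure
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        direction = (math.cos(pitch) * math.sin(yaw),
                     math.sin(pitch),
                     math.cos(pitch) * math.cos(yaw))
        for i in range(3):
            self.position[i] += speed * direction[i] * dt

s = MagicBookState()
s.flick_switch()                          # AR -> VR
s.update(pressure=0.5, yaw_deg=90.0, pitch_deg=0.0, dt=1.0)
print(s.mode, [round(c, 3) for c in s.position])  # -> VR [1.0, 0.0, 0.0]
```

Note that movement here is forward-only along the gaze, which mirrors the design choice (and its limitation) discussed in Section 4 below.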
Fig. 7. The ARToolKit tracking process.

When users are immersed in the virtual environment or are viewing the AR scenes, their position and orientation are broadcast over TCP/IP to a central server application. The server application then re-broadcasts this information to each of the networked computers and the MagicBook graphical client code. This is used to place virtual avatars of people that are viewing the same scene, so users can collaboratively explore the virtual content. Since each of the client applications contains a complete copy of the graphics code, only a very small amount of position information needs to be exchanged. Thus MagicBook applications can potentially support dozens of users. There is also no need for users to be physically co-located. The virtual avatars can be controlled by users in the same location or remote from each other, so the MagicBook technology supports both face-to-face and remote collaboration.

3.1. MagicBook applications

To encourage exploration in a number of different application areas we have developed the MagicBook as a generic platform that can be used to show almost any VRML content. VRML is a standard file format for 3D computer graphics. We use an open source VRML rendering library called libVRML97 [19] that is based on the OpenGL low-level graphics library. Since VRML is exported by most 3D modeling packages, it is very easy for content developers to build their own MagicBook applications. Once the 3D content has been developed, it is simple to make the physical book pages and the configuration files to load the correct content.

This ease of development has resulted in the production of nearly a dozen books in a variety of application domains. Among others, we have a Japanese children's story that involves the reader in a treasure hunt, a version of the Humpty Dumpty tale, a World War One history book, and a science fiction snowboard experience that allows the reader to ski Mt. St. Helens. These applications explore new literary ground where the reader can actually become part of the story and where the author must consider issues of interactivity and immersion.

The MagicBook technology also has strong application potential for scientific visualization. We have begun exploring the use of this technology for viewing geo-spatial models. Fig. 8 shows views of typical oilfield seismic data superimposed over a tracking card. Currently, petroleum companies deploy expensive projection screen based visualization centers around the world. The tracking systems used in the MagicBook interface are completely sourceless and so potentially mobile. In the near future it will be possible to run the MagicBook software from a laptop computer and so support a radically new way of presenting visualization data in the field.

One of the more interesting applications we have developed is an educational textbook designed to teach architects how to build Gerrit Rietveld's famous Red and Blue Chair (Fig. 9). After a brief introduction to Rietveld's philosophy and construction techniques, the readers are treated to a step-by-step instruction guide to building the chair. On each page is a 2D picture of the current stage of the chair construction. When readers look at this page in their hand held displays, they see a 3D model of the partially completed chair popping out of the page. On the final page they see a virtual model of the completed chair that they can fly into and see life-sized. Being able to see the chair from any angle during the construction process, as well as a life-sized model at the end, is a powerful teaching tool.

3.2. User feedback

The MagicBook software was first shown at the Siggraph 2000 conference, where over 2500 people tried the books in the course of a week.
Fig. 8. Seismic data on a tracking marker.

Fig. 9. Stages in building Gerrit Rietveld's Red and Blue Chair.

Siggraph is a demanding environment in which to display an interactive experience, because attendees typically have only a few minutes and need to be able to master the technology immediately. Although we did not have time for a rigorous user study, 54 of these people filled out a simple survey and were interviewed about their experience.

Feedback was very positive. People were able to use the interface with minimal training; they enjoyed the hand held displays, being able to view different AR scenes, and flying into the immersive VR worlds. Users felt that the interface was easy and intuitive to use. They were given two questions, "Q1: How easily could you move between the real and virtual worlds?" and "Q2: How easy was it to collaborate with others?", and asked to respond on a scale of 1–7, where 1 was "not very easy" and 7 "very easy". Table 1 shows the average responses, while Figs. 10 and 11 show the complete data sets.

Using a two-tailed Student's t-test we found that the answers to question one were significantly higher than the expected mean of 4.0 (t = 14.43, df = 53, p < 0.001). This shows that users overwhelmingly felt that they could easily transition between the real and virtual worlds. However, with question two the user responses were significantly less than the expected mean (t = 2.77, df = 53, p < 0.01), showing they thought it was not as easy to collaborate with each other. This was probably due to some of the people trying the books by themselves, or, when using them with another person, not being aware of the avatars in the scene. In order for people to see each other as avatars they needed to be immersed in the same virtual scene at the same time, which happened rarely.
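These statistics can be checked directly from the summary values in Table 1: a one-sample t-test against the scale midpoint uses t = (mean - mu) / (sd / sqrt(n)), with n = 54 (df = 53). Recomputing from the rounded table values gives t statistics close to those reported; the small differences come from rounding of the raw data:

```python
import math

# One-sample t statistic against the neutral scale midpoint mu = 4.0,
# recomputed from the rounded summary values reported in Table 1.
def one_sample_t(mean, sd, n, mu=4.0):
    return (mean - mu) / (sd / math.sqrt(n))

n = 54  # df = 53 implies 54 respondents
t_q1 = one_sample_t(5.87, 0.95, n)   # Q1: ease of transition
t_q2 = one_sample_t(3.35, 1.71, n)   # Q2: ease of collaboration
print(round(t_q1, 2), round(t_q2, 2))  # close to the reported 14.43 and 2.77
```

Note that the Q2 statistic comes out negative, consistent with the text's observation that the mean response was significantly below the midpoint.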
Table 1. User feedback

Question                      Average   Std. Dev.   Std. Error
Q1: Ease of transition        5.87      0.95        0.13
Q2: Ease of collaboration     3.35      1.71        0.23

Fig. 10. How easy was it to move between Reality and Virtual Reality? (7 = very easy).

Fig. 11. How easy was it to collaborate? (7 = very easy).

4. Future improvements

Although users felt that they could easily transition between the real and virtual worlds, there were also a number of shortcomings with the interface that they identified. Many people found it frustrating that they could not move backwards in the virtual worlds. We modeled movement in the immersive world after movement in the real world and assumed that users would rarely want to move backwards, since people rarely walk backwards. However, it seems that users expected more of a video game metaphor, and a majority of people immersed in the VR scenes asked how they could fly backwards. In the future we will explore different navigational metaphors.

Users also thought the realism and complexity of the graphics content could be improved. The ability to render and display complex scenes is a function of both the graphics cards that we were using and the hand held display properties. The current trend of rapid improvement in both graphics card performance and head mounted display resolution should remove this concern.

Interactivity is also limited in the current generation of the MagicBook. It is a compelling experience to be able to view and fly inside virtual scenes, but many applications require interaction with the virtual content that goes beyond simple navigation. For example, in an architecture application users should be able to select and lay out virtual furniture in the scenes that they are exploring. We are currently developing new metaphors based on tangible interaction techniques that could be applied in a MagicBook interface.

Another limitation is the use of a single marker for tracking by the computer vision based tracking system. If users happened to occlude part of the tracking pattern then the AR content would abruptly disappear. Recently, we have developed a multi-marker tracking method that uses sets of patterns [10]. Users can cover up one or more of these patterns without halting the AR tracking. We are in the process of incorporating this approach into the next generation of the MagicBook interface.

Finally, more rigorous user studies need to be conducted to investigate how collaboration in this seamless interface differs from collaboration with more traditional interfaces. We need to explore how this interface affects communication and collaboration patterns and whether it forces users to change the way they would normally interact in a face-to-face setting. There are also unanswered questions in terms of what interface tools are needed to support multi-scale collaboration, and how to incorporate both face-to-face and remote collaborators. Our preliminary user feedback indicates that more explicit collaboration cues may be required for users to be aware of their collaborators when immersed in the virtual scenes or viewing AR content.

5. Conclusions

As computers become more ubiquitous and invisible there is a need for new interfaces that blur the line between Reality and VR. This can only be achieved by the use of Mixed Reality interfaces that span the Reality–Virtuality continuum. The MagicBook is an early attempt at a transitional Mixed Reality interface for viewing and interacting with spatial datasets. The MagicBook allows users to move between Reality and
Virtual Reality at the flick of a switch and supports collaboration on multiple levels. Although the MagicBook facilitates viewing of sophisticated computer graphics content, the computer is invisible. Rather than using a mouse or keyboard, interaction is focused around a real book and a tangible interface that makes it very intuitive.

Initial user feedback has been very positive, and even complete novices feel that they can use the interface and become part of the virtual scenes. However, we are continuing to improve the interface. In the future we plan on exploring more intuitive ways for users to navigate through and interact with the virtual models. We are also working on ways of integrating the MagicBook approach into an environment with projective displays, and so allow seamless transition between 2D and 3D views of a data set in a traditional office setting.

For more information about the MagicBook project and to download a free version of the ARToolKit software please visit http://www.hitl.washington.edu/magicbook/.

Acknowledgements

The authors would like to thank ATR MIC Labs for their continued support, Keiko Nakao, Susan Campbell, and Dace Campbell for making the models and books shown, and Dr. Tom Furness III for creating a stimulating environment to work in.

References

[1] Ishii H, Ullmer B. Tangible bits: towards seamless interfaces between people, bits and atoms. Proceedings of CHI 97, Atlanta, Georgia, USA. New York: ACM Press, 1997. p. 234–41.
[2] Milgram P, Kishino F. A taxonomy of mixed reality visual displays [special issue on networked reality]. IEICE Transactions on Information and Systems 1994;E77-D(12):1321–9.
[3] Wellner P. Interactions with paper on the DigitalDesk. Communications of the ACM 1993;36(7):87–96.
[4] Brave S, Ishii H, Dahley A. Tangible interfaces for remote collaboration and communication. Proceedings of CSCW 98, Seattle, Washington. New York: ACM Press, November 1998. p. 169–78.
[5] Carlson C, Hagsand O. DIVE: a platform for multi-user virtual environments. Computers and Graphics 1993;17(6):663–9.
[6] Mandeville J, Davidson J, Campbell D, Dahl A, Schwartz P, Furness T. A shared virtual environment for architectural design review. Proceedings of the CVE '96 Workshop, Nottingham, Great Britain, 19–20 September, 1996.
[7] Schmalstieg D, Fuhrmann A, Szalavári Zs, Gervautz M. Studierstube: an environment for collaboration in augmented reality. Proceedings of Collaborative Virtual Environments '96; and Virtual Reality Systems: Development and Applications, vol. 3, no. 1, 1996. p. 37–49.
[8] Ohshima T, Sato K, Yamamoto H, Tamura H. AR2Hockey: a case study of collaborative augmented reality. Proceedings of VRAIS '98. Los Alamitos: IEEE Press, 1998. p. 268–95.
[9] Benford S, Greenhalgh C, Reynard G, Brown C, Koleva B. Understanding and constructing shared spaces with mixed reality boundaries. ACM Transactions on Computer–Human Interaction (ToCHI) 1998;5(3):185–223.
[10] Kato H, Billinghurst M, Poupyrev I, Imamoto K, Tachibana K. Virtual object manipulation on a table-top AR environment. Proceedings of the Third International Symposium on Augmented Reality (ISAR 2000), Munich, Germany. New York: IEEE Press, 5–6 October, 2000.
[11] Kiyokawa K, Iwasa H, Takemura H, Yokoya N. Collaborative immersive workspace through a shared augmented environment. Proceedings of the International Society for Optical Engineering '98 (SPIE '98), Boston, vol. 3517, 1998. p. 2–13.
[12] Kiyokawa K, Takemura H, Yokoya N. SeamlessDesign: a face-to-face collaborative virtual/augmented environment for rapid prototyping of geometrically constrained 3-D objects. Proceedings of the IEEE International Conference on Multimedia Computing and Systems '99 (ICMCS '99), Florence, vol. 2, 1999. p. 447–53.
[13] Stoakley R, Conway M, Pausch R. Virtual Reality on a WIM: interactive worlds in miniature. Proceedings of CHI '95. New York: ACM Press, 1995.
[14] Leigh J, Johnson A, Vasilakis C, DeFanti T. Multi-perspective collaborative design in persistent networked virtual environments. Proceedings of the IEEE Virtual Reality Annual International Symposium '96, Santa Clara, California, 30 March–3 April, 1996. p. 253–60.
[15] InterSense company website: http://www.isense.com
[16] ARToolKit website: http://www.hitl.washington.edu/research/shared_space/download/
[17] Rekimoto J, Ayatsuka Y. CyberCode: designing Augmented Reality environments with visual tags. Designing Augmented Reality Environments (DARE 2000), 2000.
[18] TNG 3B Interface Box, available from Mindtel Ltd: http://www.mindtel.com/
[19] OpenVRML website: http://www.openvrml.org/
