An Engine Enabling Location-Based Mobile Augmented Reality Applications
1 Introduction
Daily business routines increasingly require mobile access to information sys-
tems, while providing a desktop-like feeling of mobile applications to the users.
However, the design and implementation of mobile applications constitutes a
challenging task [1, 2]. Amongst others, developers must cope with limited phys-
ical resources of smart mobile devices (e.g., limited battery capacity or limited
screen size) as well as non-predictable user behaviour (e.g., mindless instant
shutdowns). Moreover, mobile devices provide advanced technical capabilities
the mobile applications may use, including motion sensors, a GPS sensor, and
a powerful camera system. On the one hand, these capabilities allow for new
kinds of business applications. On the other, the design and implementation of
such mobile applications is challenging. In particular, integrating sensors and
utilizing the data recorded by them constitute a non-trivial task when taking
requirements like robustness and scalability into account as well.
Furthermore, mobile business applications need to be developed for various
mobile operating systems (e.g., iOS and Android) in order to allow for their
widespread use. Hence, developers of mobile business applications must not only
cope with the above mentioned challenges, but also with the heterogeneity of
existing mobile operating systems, while at the same time fully utilizing their
technical capabilities. In particular, if the same functions shall be provided on
In order to enrich the image captured by the camera of the smart mobile device with virtual information about POIs in the surrounding, basic concepts enabling location-based calculations need to be developed.
– An efficient and reliable technique for calculating the distance between two positions is required (e.g., based on data of the GPS sensor in the context of location-based outdoor scenarios).
– Various sensors of the smart mobile device must be queried correctly in order to determine the attitude and position of the smart mobile device.
– The angle of view of the smart mobile device's camera lens must be calculated to display the virtual objects at the respective position of the camera view.
1.2 Contribution
In the context of AREA, we developed various concepts for coping with the lim-
ited resources of a smart mobile device, while realizing advanced features with
respect to mobile augmented reality at the same time. In this paper, we present a
(AREA stands for Augmented Reality Engine Application. A video demonstrating AREA can be viewed at: http://vimeo.com/channels/434999/63655894. Further information can be found at: http://www.area-project.info)
2 AREA Approach
The basic concept realized in AREA is the locationView. The points of interest inside the camera's field of view are displayed on it, having a size of √(width² + height²) pixels. The locationView is placed centrally on the screen of the smart mobile device.
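To illustrate this size calculation, the following sketch (in Java, with assumed method and parameter names; not taken from AREA's code base) computes the edge length of a square locationView that still covers the whole screen under any rotation:

```java
// Minimal sketch (not AREA's actual code): the locationView is a square whose
// edge length equals the screen diagonal, so that it still covers the whole
// screen after being rotated by an arbitrary angle.
public final class LocationViewGeometry {

    /** Edge length (in pixels) of a square view covering a w x h screen under any rotation. */
    public static int locationViewSize(int screenWidthPx, int screenHeightPx) {
        double diagonal = Math.sqrt((double) screenWidthPx * screenWidthPx
                + (double) screenHeightPx * screenHeightPx);
        return (int) Math.ceil(diagonal);
    }

    public static void main(String[] args) {
        // Example: a 640 x 960 px screen needs a locationView of about 1154 px.
        System.out.println(locationViewSize(640, 960));
    }
}
```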
The third reason for using the presented locationView concept concerns performance. When the display shall be redrawn, the POIs already drawn on the locationView can be easily queried and reused. Instead of first clearing the entire screen and afterwards re-initializing and redrawing already visible POIs, POIs that shall remain visible need not be redrawn. Finally, POIs located outside the field of view after a rotation are deleted from it, whereas POIs that emerge inside the field of view are initialized.
Fig. 2 sketches the basic algorithm used for realizing this locationView; more technical details can be found in a technical report [8].
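The following sketch (hypothetical types and names, not the algorithm of Fig. 2 verbatim) illustrates the redraw step described above: POIs still visible are reused, POIs that left the field of view are removed, and newly visible POIs are initialized:

```java
import java.util.Map;
import java.util.Set;

// Hedged sketch of the redraw step: already drawn POIs are reused rather than
// cleared and redrawn; only POIs entering or leaving the field of view change.
final class LocationViewRedraw {

    static void redraw(Map<String, Object> drawnPois,          // POI id -> drawn view object
                       Set<String> visiblePoiIds,               // ids currently inside the field of view
                       java.util.function.Function<String, Object> initPoiView) {
        // Remove POIs that are no longer inside the field of view.
        drawnPois.keySet().removeIf(id -> !visiblePoiIds.contains(id));
        // Initialize POIs that newly entered the field of view;
        // POIs that are already drawn are simply kept (no re-initialization).
        for (String id : visiblePoiIds) {
            drawnPois.computeIfAbsent(id, initPoiView);
        }
    }
}
```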
2.2 Architecture
The AREA architecture has been designed with the goal to easily exchange and
extend its components. The design comprises four main modules organized in a
multi-tier architecture and complying with the Model View Controller pattern
(cf. Fig. 3). Lower tiers offer their services and functions through interfaces to upper tiers. In particular, tier 2 (cf. Fig. 3) will be described in detail in Sect. 3 when discussing the differences regarding the development of AREA on iOS and Android respectively. Based on this architectural design, modularity
can be ensured; i.e., both data management and other elements (e.g., POIs) can
be customized and easily extended on demand. Furthermore, the compact design
of AREA enables us to build new mobile business applications based on it as
well as to easily integrate it with existing applications.
Tier 3, the Model, provides modules and functions to exchange POIs. In this context, we use both an XML- and a JSON-based interface to collect and parse POIs. In turn, these POIs are stored in a global database. Note that we do not rely on the ARML schema [9], but use a proprietary XML schema instead. In particular, this allows us to extend our XML-based format in the context of future research on AREA. Finally, the JSON interface uses a light-weight, easily understandable, and extensible format that developers are familiar with.
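As an illustration of such a JSON interface, the following sketch parses a list of POIs using the org.json library; since the proprietary schema is not reproduced here, the field names (name, latitude, longitude, altitude) are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import org.json.JSONArray;
import org.json.JSONObject;

// Illustrative sketch only: the field names used here are assumptions,
// not AREA's actual JSON schema.
final class PoiJsonParser {

    static final class Poi {
        final String name;
        final double latitude, longitude, altitude;
        Poi(String name, double latitude, double longitude, double altitude) {
            this.name = name; this.latitude = latitude;
            this.longitude = longitude; this.altitude = altitude;
        }
    }

    /** Parses a JSON array such as [{"name":"...","latitude":48.39,"longitude":9.99,"altitude":478.0}]. */
    static List<Poi> parse(String json) {
        JSONArray array = new JSONArray(json);
        List<Poi> pois = new ArrayList<>();
        for (int i = 0; i < array.length(); i++) {
            JSONObject o = array.getJSONObject(i);
            pois.add(new Poi(o.getString("name"),
                             o.getDouble("latitude"),
                             o.getDouble("longitude"),
                             o.optDouble("altitude", 0.0)));
        }
        return pois;
    }
}
```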
Tier 2, the Controller, consists of two main modules. The Sensor Controller is responsible for querying the sensors needed to determine the device's location and orientation. The sensors to be queried include the GPS sensor, the accelerometer, and the compass sensor. The GPS sensor is used to determine the
position of the device. Since we currently focus on location-based outdoor sce-
narios, GPS coordinates are predominantly used. In future work, we will consider
indoor scenarios as well. Note that AREA's architecture has been designed to
easily change the way coordinates are obtained. Using the GPS coordinates and the corresponding altitude, we can calculate the distance between the mobile device and a POI, the horizontal bearing, and the vertical bearing. The latter is used to display a POI higher or lower on the screen, depending on its own altitude.
In turn, the accelerometer provides data for determining the current rotation of the device, i.e., the orientation of the device (landscape, portrait, or any orientation in between) (cf. Fig. 1). Since the accelerometer is used to determine the vertical viewing direction, we need the compass data of the mobile device
to determine the horizontal viewing direction of the user as well. Based on the vertical and horizontal viewing directions, we are able to calculate the direction of the field of view as well as its boundaries according to the camera angle of view of the device. The Point of Interest Controller (cf. Fig. 3) uses data of the Sensor Controller in order to determine whether a POI lies inside the vertical and horizontal field of view. Furthermore, for each POI it calculates its position on the screen, taking the current field of view and the camera angle of view into account.
Tier 1, the View, consists of various user interface elements, e.g., the locationView, the Camera View, and the specific view of a POI (i.e., the Point of Interest View). Thereby, the Camera View displays the data captured by the device's camera. Right on top of the Camera View, the locationView is placed. It displays POIs located inside the current field of view at their specific positions
as calculated by the Point of Interest Controller. To rotate the locationView,
the interface of the Sensor Controller is used. The latter allows determining
the orientation of the device. Furthermore, a radar can be used to indicate the
direction in which invisible POIs are located (Fig. 5 shows an example of the
radar). Finally, AREA uses libraries of the mobile development frameworks,
which provide access to core functionality of the underlying operating system,
e.g., sensors and screen drawing functions (cf. Native Frameworks in Fig. 3).
The iOS version of AREA has been implemented using the programming language Objective-C and iOS Version 7.0 on an Apple iPhone 4S. Furthermore, the Xcode environment (Version 5) has been used for developing AREA.
Sensor Controller The Sensor Controller is responsible for querying the needed sensors in order to correctly position the POIs on the screen of the smart mobile device. To achieve this, iOS provides the CoreMotion and CoreLocation frameworks. We use the latter framework to get notified about changes of the location as well as of the compass heading. Since we want to be informed about every change of the compass heading, we adjusted the heading filter of the CoreLocation framework accordingly. When the framework sends us new heading data, its data structure contains a real heading as well as a magnetic one as floats. The real heading refers to the geographic north pole, whereas the magnetic heading refers to the magnetic north pole. Since our coordinates correspond to GPS coordinates, we use the real heading data structure. Note that the values of the heading will become (very) inaccurate and oscillate when the device is moved. To cope with this, we apply a low-pass filter to the heading in order to obtain smooth and accurate values, which can then be used to position the POIs on the screen [12]. Similar to the heading, we can adjust how often we want to
be informed about location changes. On the one hand, we want to get notified about all relevant location changes; on the other, every change requires a recalculation of the surrounding POIs. Thus, we decided to get notified only if a difference of at least 10 meters occurs between the old and the new location. Note that this is generally acceptable for the kind of applications we consider (cf. Section 4.1).
Finally, the data structure representing a location contains GPS coordinates of
the device in degrees north and degrees east as decimal values, the altitude in
meters, and a time stamp.
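The low-pass filtering of the compass heading can be sketched as follows (an exponential filter with an assumed smoothing factor; not necessarily the filter used in AREA). Note that the 0°/360° wrap-around has to be handled explicitly:

```java
// Sketch of an exponential low-pass filter for compass headings (degrees).
// The wrap-around at 0/360 is handled by filtering the shortest angular difference.
// The smoothing factor alpha is an assumption, not the value used in AREA.
final class HeadingFilter {
    private final double alpha;   // 0 < alpha <= 1; smaller values smooth more strongly
    private Double filtered;      // last filtered heading, null until the first sample

    HeadingFilter(double alpha) { this.alpha = alpha; }

    double update(double rawHeadingDegrees) {
        if (filtered == null) {
            filtered = normalize(rawHeadingDegrees);
        } else {
            // Shortest signed difference between raw and filtered heading, in [-180, 180).
            double delta = normalize(rawHeadingDegrees - filtered + 180.0) - 180.0;
            filtered = normalize(filtered + alpha * delta);
        }
        return filtered;
    }

    private static double normalize(double degrees) {
        double d = degrees % 360.0;
        return d < 0 ? d + 360.0 : d;
    }
}
```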
In turn, the CoreMotion framework provides interfaces to query the accelerometer. The latter is used to determine the current rotation of the device as well as the direction it is pointing to (e.g., upwards or downwards). As opposed to location and heading data, accelerometer data is not automatically pushed to the application by the CoreMotion framework of iOS. Therefore, we had to define an application loop polling this data every 1/90 seconds. On the one hand, this rate is fast enough to obtain smooth values; on the other, it is low enough to save battery power.
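On iOS, this loop is realized with the CoreMotion framework; the following plain-Java analogue (with a placeholder sensor read) merely illustrates the chosen polling rate of roughly 1/90 seconds:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Analogy only: a loop polling a sensor roughly every 1/90 s (~11 ms).
// The actual sensor read is a placeholder.
final class AccelerometerPollingLoop {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        Runnable pollSensor = () -> {
            // Placeholder: read and process the latest accelerometer sample here.
        };
        long periodMicros = 1_000_000L / 90;  // about 11111 microseconds
        scheduler.scheduleAtFixedRate(pollSensor, 0, periodMicros, TimeUnit.MICROSECONDS);
    }
}
```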
Basically, the data delivered by the accelerometer consists of three values, i.e., the accelerations in x-, y-, and z-direction. In general, gravity is required for calculating the direction a device is pointing to. However, we cannot obtain the gravity directly from the acceleration data, but must additionally apply a low-pass filter to the x-, y-, and z-direction values; i.e., the three values are averaged and filtered. In order to obtain the vertical heading as well as the rotation of the device, we then apply the following steps: First, by calculating arcsin(z) we obtain a value between -90° and +90° describing the vertical heading. Second, by calculating arctan2(y, x), we obtain a value between 0° and 359°, describing the degree of rotation of the device.
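The following sketch summarizes these two steps (the smoothing factor of the low-pass filter is an assumption):

```java
// Sketch of the computation described above: a low-pass filter applied to the raw
// acceleration approximates the gravity vector; arcsin(z) then yields the vertical
// heading and atan2(y, x) the rotation of the device. The filter constant is assumed.
final class DeviceOrientation {
    private static final double ALPHA = 0.1;            // low-pass smoothing factor (assumed)
    private double gx, gy, gz;                            // filtered (gravity) components

    /** Feed one accelerometer sample; components are expected to be normalized to [-1, 1] (in g). */
    void addSample(double ax, double ay, double az) {
        gx = gx + ALPHA * (ax - gx);
        gy = gy + ALPHA * (ay - gy);
        gz = gz + ALPHA * (az - gz);
    }

    /** Vertical heading in degrees, between -90 and +90. */
    double verticalHeadingDegrees() {
        double z = Math.max(-1.0, Math.min(1.0, gz));    // clamp against rounding errors
        return Math.toDegrees(Math.asin(z));
    }

    /** Rotation of the device in degrees, between 0 and 359. */
    double rotationDegrees() {
        double deg = Math.toDegrees(Math.atan2(gy, gx));
        return (deg + 360.0) % 360.0;
    }
}
```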
Since we need to consider all possible orientations of the smart mobile device, we must adjust the compass data accordingly. For example, assume that we hold the device in portrait mode in front of us towards North. Then, the compass data we obtain indicates that we are viewing in Northern direction. As soon as we rotate the device, however, the compass data will change although our view still goes to Northern direction. The reason for this is that the reference point of the compass corresponds to the upper end of the device. To cope with this issue, we must adjust the compass data using the rotation calculation presented above. When subtracting the rotation value (i.e., between 0° and 359°) from the compass data, we obtain the desired compass value, while still viewing in Northern direction after rotating the device.
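A minimal sketch of this correction (hypothetical method name) subtracts the rotation from the raw compass heading and normalizes the result to [0°, 360°):

```java
// Sketch of the compass correction described above: subtracting the device
// rotation from the raw compass heading keeps the viewing direction stable
// while the device is rotated (result normalized to [0, 360)).
final class CompassAdjustment {
    static double adjustedHeading(double compassHeadingDegrees, double rotationDegrees) {
        double adjusted = (compassHeadingDegrees - rotationDegrees) % 360.0;
        return adjusted < 0 ? adjusted + 360.0 : adjusted;
    }
}
```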
inside the radius. Due to space limitations, we do not describe these calculations
in detail, but refer interested readers to a technical report [8].
As explained in [8], the vertical bearing can be calculated based on the alti-
tudes of both the POIs and the smart mobile device (the latter can be determined
from the current GPS coordinates). The horizontal bearing, in turn, can be com-
puted with the Haversine formula by applying it to the GPS coordinates of the
POI and the smart mobile device. In order to avoid costly recalculations of these surrounding POIs in case the GPS coordinates do not change (i.e., movements are within 10 m), we buffer POI data inside the controller implementation.
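The following sketch shows the textbook formulation of these calculations (great-circle distance via the Haversine formula, the initial bearing as horizontal bearing, and a vertical bearing derived from the altitude difference and the distance); it is not necessarily identical to AREA's implementation:

```java
// Textbook formulation (not necessarily AREA's exact code) of the calculations
// mentioned above.
final class GeoCalculations {
    private static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Great-circle distance in meters between two WGS84 coordinates (Haversine formula). */
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    /** Initial (horizontal) bearing in degrees from the device towards the POI, in [0, 360). */
    static double horizontalBearingDegrees(double lat1, double lon1, double lat2, double lon2) {
        double phi1 = Math.toRadians(lat1), phi2 = Math.toRadians(lat2);
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(phi2);
        double x = Math.cos(phi1) * Math.sin(phi2) - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    /** Vertical bearing in degrees, based on the altitude difference and the distance. */
    static double verticalBearingDegrees(double deviceAltitudeM, double poiAltitudeM, double distanceM) {
        return Math.toDegrees(Math.atan2(poiAltitudeM - deviceAltitudeM, distanceM));
    }
}
```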
The heading and accelerometer data need to be processed when a notification from the Sensor Controller is obtained (Section 3.1.1). Then it needs to be determined which POIs are located inside the vertical and horizontal field of view, and at which positions they shall be displayed on the locationView. Recall that the locationView extends the actual field of view to a larger, orientation-independent field of view (cf. Fig. 4a). First, the boundaries of the locationView need to be determined based on the available sensor data. In this context, the heading data provides the information required to determine the direction the device is pointing to. The left boundary of the locationView can be calculated by determining the horizontal heading and decreasing it by half of the maximal angle of view (cf. Fig. 4a). In turn, the right boundary is calculated by adding half of the maximal angle of view to the current heading. Since POIs also have
a vertical heading, a vertical field of view must be calculated as well. This can be accomplished analogously to the calculation of the horizontal field of view, except that the data of the vertical heading is required instead. Finally, we obtain a directed, orientation-independent field of view bounded by left, right, top, and bottom values. Then we use the vertical and horizontal bearings of a POI to determine whether it lies inside the locationView (i.e., inside the field of view). Since we use the locationView concept, we do not have to deal with the rotation of the device, i.e., we can normalize calculations to portrait mode since the rotation itself is handled by the locationView.
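The containment test can be sketched as follows (parameter names are illustrative); the horizontal comparison accounts for the 0°/360° wrap-around, whereas the vertical comparison is a plain interval check:

```java
// Sketch of the field-of-view test described above: the locationView's boundaries
// are the current heading plus/minus half of the maximal angle of view; a POI lies
// inside if both its horizontal and vertical bearings fall within these boundaries.
final class FieldOfView {

    static boolean poiInsideLocationView(double horizontalHeadingDeg, double verticalHeadingDeg,
                                         double maxHorizontalAngleDeg, double maxVerticalAngleDeg,
                                         double poiHorizontalBearingDeg, double poiVerticalBearingDeg) {
        boolean horizontallyInside =
                angularDistance(horizontalHeadingDeg, poiHorizontalBearingDeg) <= maxHorizontalAngleDeg / 2.0;
        boolean verticallyInside =
                Math.abs(poiVerticalBearingDeg - verticalHeadingDeg) <= maxVerticalAngleDeg / 2.0;
        return horizontallyInside && verticallyInside;
    }

    /** Smallest absolute angular distance between two headings, in [0, 180]. */
    private static double angularDistance(double a, double b) {
        double diff = Math.abs(a - b) % 360.0;
        return diff > 180.0 ? 360.0 - diff : diff;
    }
}
```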
The camera view can be created and displayed by applying the native AVFoundation framework. Using the screen size of the device, which can be determined at run time, the locationView can be initialized and placed centrally on top of the camera view. As soon as the Point of Interest Controller has finished its calculations (i.e., it has determined the positions of the POIs), it notifies the View Controller that organizes the view components. The View Controller then receives the POIs and places them on the locationView. Recall that in case of a device rotation, only the locationView must be rotated. As a consequence, the actual visible field of view changes accordingly. Therefore, the Point of Interest Controller sends the rotation of the device calculated by the Sensor Controller to the View Controller, together with the POIs. Thus, we can adjust the field of view by simply counter-rotating the locationView using the given angle. The user will only see those POIs on the screen that are inside the actual field of view; other POIs will be hidden after the rotation, i.e., they will be moved off the screen (cf. Fig. 1). Related implementation issues are discussed in [8].
Sensor Controller For implementing the Sensor Controller, the packages android.location and android.hardware can be used. The location package provides functions to retrieve the current GPS coordinate and altitude of the respective device; hence, it is similar to the corresponding iOS framework. Additionally, the Android location package allows retrieving an approximate position of the device based on network triangulation. Particularly, if no GPS signal is available, the latter approach can be applied. As a drawback, however, no information about the current altitude of the device can be determined in this case. In turn, the hardware package provides functions to get notified about the current magnetic field and accelerometer data. The latter corresponds to the one of iOS. It is used
to calculate the rotation of the device. However, the heading is calculated in a different way compared to iOS. Instead of obtaining it with the location service, it must be determined manually. Generally, the heading depends on the rotation of the device and the magnetic field. Therefore, we create a rotation matrix using the data of the magnetic field (i.e., a vector with three dimensions) and the rotation based on the accelerometer data. Since the heading data depends on the accelerometer as well as the magnetic field, it is rather inaccurate. More precisely, the calculated heading is strongly oscillating. Hence, we apply a low-pass filter to mitigate this oscillation. Note that this low-pass filter differs from the one used in Section 3.1.1 when calculating the gravity.
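A minimal Android sketch of this heading computation (not AREA's actual implementation) combines accelerometer and magnetic field samples via SensorManager; the described low-pass filtering would be applied on top of the returned azimuth:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Minimal sketch: accelerometer and magnetic field samples are combined into a
// rotation matrix, from which the azimuth (heading) is obtained.
public class HeadingListener implements SensorEventListener {
    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];
    private float headingDegrees;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
        float[] rotationMatrix = new float[9];
        float[] orientation = new float[3];
        if (SensorManager.getRotationMatrix(rotationMatrix, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(rotationMatrix, orientation);
            // orientation[0] is the azimuth in radians; convert to degrees in [0, 360).
            headingDegrees = (float) ((Math.toDegrees(orientation[0]) + 360.0) % 360.0);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Could be used to warn the user when accuracy == SensorManager.SENSOR_STATUS_UNRELIABLE.
    }

    public float getHeadingDegrees() {
        return headingDegrees;
    }
}
```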
As soon as other magnetic devices are located near the actual mobile device, the heading is distorted. To notify the user about the presence of such a disturbed magnetic field, which leads to false heading values, we apply functions of the hardware package. Another difference between iOS and Android concerns the way the required data can be obtained. Regarding iOS, location-based data is pushed, whereas sensor data must be polled. As opposed to iOS, on Android all data is pushed by the framework, i.e., application programmers rely on Android-internal loops and trust that the data provided is up to date. Note that such subtle differences between mobile operating systems and their development frameworks should be well understood by the developers of advanced mobile business applications.
way. Thus, when the Point of Interest Controller receives sensor data from the Sensor Controller, the x- and y-coordinates of the POIs must be determined in a different way. Instead of placing the POIs independently of the device's current rotation, we utilize the degree of rotation provided by the Sensor Controller. Following this, the POIs are rotated around the centre of the locationView and they are also rotated about their centres (cf. Fig. 4b). Using this approach, we can still add all POIs to the field of view of the locationView. Finally, when rotating the POIs, they will automatically leave the device's actual field of view.
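The underlying 2D rotation of a POI position around the centre of the locationView can be sketched as follows (names are illustrative):

```java
// Sketch of the 2D rotation described above: a POI position is rotated around the
// centre of the locationView by the device's rotation angle; the POI view itself
// would additionally be rotated about its own centre so that it stays upright.
final class PoiRotation {
    static double[] rotateAroundCenter(double x, double y, double centerX, double centerY,
                                       double angleDegrees) {
        double rad = Math.toRadians(angleDegrees);
        double dx = x - centerX;
        double dy = y - centerY;
        double rotatedX = centerX + dx * Math.cos(rad) - dy * Math.sin(rad);
        double rotatedY = centerY + dx * Math.sin(rad) + dy * Math.cos(rad);
        return new double[] { rotatedX, rotatedY };
    }
}
```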
Fig. 4: (a) Illustration of the new maximal angle of view and the real one. (b) Rotation of a POI and field of view.
Realizing the locationView The developed locationView and its specific features differ between the Android and iOS implementations of AREA. Regarding the iOS implementation, we are able to realize the locationView concept as described in Section 2.1. On the Android operating system, however, not all features of this concept have worked properly. More precisely, extending the device's current field of view to the bigger size of the locationView worked well. Furthermore, determining whether a POI lies inside the field of view, independent of the current rotation of the device, worked well. By contrast, rotating the locationView with its POIs to adjust the visible field of view as well as moving invisible POIs out of the screen did not work on Android as expected. As a
particular challenge we faced in this context, a simple view on Android cannot contain any child views. Therefore, on Android we had to use the layout concept for realizing the described locationView. However, simply rotating a layout does not work on all Android devices. For example, on a Nexus 4 device this worked well when implementing the algorithm in exactly the same way as on iOS. In turn, on a Nexus 5 device this led to failures regarding the redraw process. When rotating the layout on the Nexus 5, the locationView is clipped by the camera surface view, which is located behind our locationView. As a consequence, to ensure that AREA is compatible with a wider set of Android devices running Android 4.0 (or a higher version), we applied the adjustments described in Section 4.2.
Accessing Sensors The use of sensors on the two mobile operating systems differs. This concerns the access to the sensors as well as their precision and reliability. Regarding iOS, the location sensor provides both GPS coordinates and the compass heading. This data is pushed to the application by an underlying iOS service. In turn, on Android, the location sensor solely provides data of the current location. Furthermore, this data must be polled by the application. Heading data, in turn, is calculated through the fusion of several motion sensors, including the accelerometer and the magnetometer.
The accelerometer is used on both platforms to determine the current orientation of the device. However, the precision of the provided data differs significantly. Compiling and running AREA on iOS 6 results in very reliable compass data with an interval of one degree. Compiling and running AREA on iOS 7 leads to different results compared to iOS 6. On the one hand, iOS 7 allows for a higher resolution of the data intervals provided by the framework due to the use of floating-point data instead of integers. On the other, the delivered compass data is partially unreliable. Furthermore, in the context of iOS 7, compass data tends to oscillate within a certain interval when moving the device. Therefore, we
4 Validation
This section gives insights into the development of business applications based on AREA and the lessons we learned from this. AREA has been integrated with several business applications. For example, the LiveGuide application [13], which has been realized based on AREA, can be used to provide residents and tourists of a city with the opportunity to explore their surroundings by displaying points of interest (e.g., public buildings, parks, and event locations). When implementing respective business applications based on AREA, one can benefit from the modular design of AREA as well as its extensibility.
For developing LiveGuide, the following two steps were sufficient: First, the appearance of the POIs was adapted to meet UI requirements of the customers. Second, the AREA data model needed to be adapted to an existing one. When developing applications like LiveGuide, we gained profound practical insights regarding the use of AREA.
Release Updates of Mobile Operating Systems Both the iOS and Android
mobile operating systems are frequently updated. In turn, respective updates
must be carefully considered when developing and deploying an advanced mo-
bile business application like AREA. Since the latter depends on the availability
of accurate sensor data, fundamental changes of the respective native libraries
might aect the proper execution of AREA. For example, consider the following
scenarios we needed to cope with in the context of an Android operating system
update (from Android 4.2 to 4.3). In Android 4.2, the sensor framework notifies AREA when measured data becomes unreliable. By contrast, with Android 4.3, certain constants (e.g., SENSOR_STATUS_UNRELIABLE) we had used before were no longer known on the respective devices. To deal with this issue, the respective constant had to be replaced by a listener (onAccuracyChanged).
As another example, consider the release of iOS 7, which led to a significant change of the look and feel of the entire user interface. In particular, some of the user interface elements we had customized in the deployed version of the LiveGuide applications got hidden from one moment to the other or did not react to user interactions anymore. Altogether, adjusting mobile applications in the context of operating system updates might cause considerable effort.
5 Related Work
Only little work deals with the engineering of augmented reality systems itself. As an exception, [20] evaluates existing augmented reality
browsers. However, neither software vendors nor academic approaches related
to augmented reality provide insights into the way a location-based mobile aug-
mented reality engine can be developed.
6 Summary
This paper gives insights into the development of the core framework of an
augmented reality engine for smart mobile devices. We further show how business
applications can be implemented based on the functions provided by this mobile
augmented reality engine. As discussed along selected implementation issues, the development of such an engine is challenging.
First, basic knowledge of mathematical calculations is required, e.g., formulas to calculate the distance and heading of points of interest on a sphere in the context of outdoor scenarios. Furthermore, profound knowledge about the various sensors of smart mobile devices is required from application developers.
Second, resource and energy consumption must be addressed. Since smart mobile devices have limited resources and performance capabilities, the points of interest should be displayed in an efficient way and without delay. Hence, the calculations required to handle sensor data and screen drawing must be implemented efficiently. The latter is accomplished through the locationView concept, which allows increasing the field of view by reusing already drawn points of interest. In particular, the increased size allows the AREA engine to easily determine whether a point of interest is inside the locationView without the need to consider the current rotation of the smart mobile device. In addition, all displayed points of interest can be easily rotated.
Third, we argue that an augmented reality engine like AREA must provide a sufficient degree of modularity to allow for a full and easy integration with existing applications as well as for implementing new applications on top of it.
Fourth, we have demonstrated how to integrate AREA in a real-world business application (i.e., LiveGuide) and utilize its functions in this context. The respective application has been made available in the Apple App Store and Google Play Store, showing high robustness. Finally, we have given insights into the differences between Apple's and Google's mobile operating systems when developing AREA.
Currently, AREA can only be applied in outdoor scenarios due to its de-
pendency on GPS. In future research AREA shall be extended to cover indoor
scenarios as well. In this context, we will consider Wi-Fi triangulation as well as
Bluetooth 4.0 beacons to be able to determine the indoor position of the device.
References
1. Geiger, P., Schickler, M., Pryss, R., Schobel, J., Reichert, M.: Location-based mobile augmented reality applications: Challenges, examples, lessons learned. 10th Int'l Conf on Web Inf Sys and Technologies (WEBIST 2014) (2014) 383–394
2. Pryss, R., Mundbrod, N., Langer, D., Reichert, M.: Supporting medical ward
rounds through mobile task and process management. Information Systems and
e-Business Management (2014) 140
3. Fröhlich, P., Simon, R., Baillie, L., Anegg, H.: Comparing conceptual designs for mobile access to geo-spatial information. Proc 8th Conf on Human-computer Interaction with Mobile Devices and Services (2006) 109–112
4. Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., Ivkovic, M.: Augmented reality technologies, systems and applications. Multimedia Tools and Applications 51 (2011) 341–377
5. Paucher, R., Turk, M.: Location-based augmented reality on mobile phones. IEEE Conf Comp Vision and Pattern Recognition Workshops (2010) 9–16
6. Reitmayr, G., Schmalstieg, D.: Location based applications for mobile augmented reality. Proc 4th Australasian Conf on User Interfaces (2003) 65–73
7. Lee, R., Kitayama, D., Kwon, Y., Sumiya, K.: Interoperable augmented web brows-
ing for exploring virtual media in real space. Proc of the 2nd Int'l Workshop on
Location and the Web (2009)
8. Geiger, P., Pryss, R., Schickler, M., Reichert, M.: Engineering an advanced
location-based augmented reality engine for smart mobile devices. Technical Re-
port UIB-2013-09, University of Ulm, Germany (2013)
9. ARML: Augmented reality markup language. http://openarml.org/wikitude4.
html (2014) [Online; accessed 07/05/2014].
10. Corral, L., Sillitti, A., Succi, G.: Mobile multiplatform development: An experiment for performance analysis. Procedia Computer Science 10 (2012) 736–743
11. Schobel, J., Schickler, M., Pryss, R., Nienhaus, H., Reichert, M.: Using vital sensors in mobile healthcare business applications: Challenges, examples, lessons learned. Int'l Conf on Web Information Systems and Technologies (2013) 509–518
12. Kamenetsky, M.: Filtered audio demo. http://www.stanford.edu/~boyd/ee102/
conv_demo.pdf (2014) [Online; accessed 07/05/2014].
13. CMCityMedia: City liveguide. http://liveguide.de (2014) [Online; accessed
07/05/2014].
14. Feiner, S., MacIntyre, B., Höllerer, T., Webster, A.: A touring machine: Prototyping 3d mobile augmented reality systems for exploring the urban environment. Personal Technologies 1 (1997) 208–217
15. Kooper, R., MacIntyre, B.: Browsing the real-world wide web: Maintaining awareness of virtual information in an AR information space. Int'l J Human-Comp Interaction 16 (2003) 425–446
16. Kähäri, M., Murphy, D.: Mara: Sensor based augmented reality system for mobile imaging device. 5th IEEE and ACM Int'l Symposium on Mixed and Augmented Reality (2006)
17. Wikitude: Wikitude. http://www.wikitude.com (2014) [Online; accessed
07/05/2014].
18. Layar: Layar. http://www.layar.com/ (2014) [Online; accessed 07/05/2014].
19. Junaio: Junaio. http://www.junaio.com/ (2014) [Online; accessed 07/05/2014].
20. Grubert, J., Langlotz, T., Grasset, R.: Augmented reality browser survey. Technical
report, University of Technology, Graz, Austria (2011)