Human–Computer Interaction on the Skin
The skin offers exciting possibilities for human–computer interaction by enabling new types of input and
feedback. We survey 42 research papers on interfaces that allow users to give input on their skin. Skin-based
interfaces have developed rapidly over the past 8 years, but most work consists of individual prototypes,
with limited overview of possibilities or identification of research directions. The purpose of this article is to
synthesize what skin input is, which technologies can sense input on the skin, and how to give feedback to
the user. We discuss challenges for research in each of these areas.
CCS Concepts: • Human-centered computing → Interaction devices;
Additional Key Words and Phrases: Skin input, on-body interaction, tracking technologies
ACM Reference format:
Joanna Bergström and Kasper Hornbæk. 2019. Human–Computer Interaction on the Skin. ACM Comput. Surv.
52, 4, Article 77 (August 2019), 14 pages.
https://doi.org/10.1145/3332166
1 INTRODUCTION
Our skin is an attractive platform for user interfaces. The skin provides a large surface for input
that is always with us and that enables rich types of interaction. We can, for instance, navigate the
menu of a mobile phone by sliding a finger across invisible controls on the palm [11] or control a
computer by tapping shortcuts on the forearm [26]. User input can be estimated using cameras [4,
12, 42] or acoustic signals propagating on the skin [14, 26, 32].
The skin enables exciting possibilities for interaction and user interface design. First, the skin
enables new input types. It can be touched, grabbed, pulled, pressed, scratched, sheared, squeezed,
and twisted [53], and we easily relate meanings to these actions, such as equating a strong grab
with anger. Second, skin-based interfaces free us from carrying mobile devices in our hands and
extend their input areas to support off-screen input. Our skin surface is hundreds of times larger
than the touchscreen of an average mobile phone [19] and can be turned into an input device using
a watch or an armband worn on the body. Third, feeling input on the skin can help us achieve better
user experience and effectiveness than when using common input devices [19].
Since the widely cited paper on skin input by Harrison et al. [14] was published in 2010, researchers
have developed many novel technologies for skin-based interfaces. The pros and cons of
those technologies have not been systematically compared. Moreover, we understand little about
the benefits of skin input and how to lay out controls and give feedback on the skin. The aim
of this article is to step back and synthesize what we know about human–computer interaction
on the skin, and outline key challenges for research. We also aim to move beyond the oddities of
individual prototypes toward some general insights on new opportunities.
Fig. 1. The four types of skin input. The area of skin contact can be varied, e.g., by touching with one to
multiple fingers, or by bringing some fingers or the whole palms together. The skin can be deformed, e.g., by
pressing, pushing, or pulling the skin with one finger, or by pinching with the index finger and the thumb.
Touch gestures include, e.g., drawing shapes, handwriting letters, or controlling a continuous slider. Discrete
touch input can be given by tapping, or by sliding the finger on the skin and selecting a target by lifting the
finger.
targets (Figure 1), similar to touching keys on a touchscreen. Another form of input for selecting
discrete targets is sliding a finger across the skin and selecting the target by lifting the finger or
double tapping (used in 13% of the studies). In a study by Gustafson et al. [11], participants used
sliding input to find menu items of a mobile phone on their palm. When sliding across an item
the participants heard the item name as audio feedback, and could select it using a double tap. Lin
et al. [26] examined how many targets blindfolded participants could distinguish on their forearm.
The participants were free to choose a strategy for selection. Three strategies emerged: tapping,
sliding, and jumping (i.e., tapping along the forearm until the selected location is reached). Most
studies (93%) used the index finger for tapping, and many (24%) also used the thumb; a few studies
allowed using both. Sliding input was always performed with the index finger.
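For illustration, the logic behind such lift-off selection can be written as a short event loop: the tracked position is continuously mapped to one of the targets, feedback is given whenever the mapped target changes, and the selection is committed when the finger lifts. The Python sketch below assumes a hypothetical touch-event stream and announce() callback; it is not the implementation of any surveyed system.

    # Illustrative sketch of lift-off selection for sliding input on the skin.
    # The event format and the announce() callback are hypothetical placeholders.
    from dataclasses import dataclass
    from typing import Callable, Optional, Sequence

    @dataclass
    class TouchEvent:
        kind: str        # "move" while the finger stays on the skin, "up" on lift-off
        position: float  # tracked position along the forearm, in mm

    def sliding_selection(events: Sequence[TouchEvent], targets: Sequence[str],
                          arm_length_mm: float,
                          announce: Callable[[str], None]) -> Optional[str]:
        """Map a sliding finger to evenly spaced targets; commit the selection on lift-off."""
        current: Optional[int] = None
        for event in events:
            if event.kind == "move":
                # Quantize the continuous position into one of the target slots.
                slot = int(event.position / arm_length_mm * len(targets))
                slot = min(max(slot, 0), len(targets) - 1)
                if slot != current:
                    current = slot
                    announce(targets[slot])   # e.g., speak the item name as audio feedback
            elif event.kind == "up" and current is not None:
                return targets[current]       # lifting the finger commits the selection
        return None

    # Example: five forearm targets; the finger slides from the wrist toward the elbow and lifts.
    events = [TouchEvent("move", p) for p in (10.0, 60.0, 110.0, 160.0)] + [TouchEvent("up", 160.0)]
    print(sliding_selection(events, ["A", "B", "C", "D", "E"], 250.0, print))   # announces A..D, selects "D"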
Touch gestures, such as flicking, swiping, panning, zooming, and drawing shapes were used as
input in 37% of the studies. These gestures are similar to those used with touchscreens. With touch
gestures the users can, for instance, input letters by drawing on their palm [4, 50]. Touch gestures
do not require input at an absolute location on the skin; they are recognized from the pattern of
movement.
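Because such gestures are recognized from their shape rather than from absolute positions on the skin, a minimal recognizer only needs to normalize the drawn path and compare it against stored templates. The Python sketch below shows this idea in a heavily simplified form (resampling, translation, and scale normalization followed by a nearest-template match); it illustrates the general approach rather than the method of any surveyed system.

    # Minimal shape-based gesture matching: normalize a drawn path and return the
    # nearest stored template by average point-to-point distance. Illustrative only.
    import math
    from typing import Dict, List, Tuple

    Point = Tuple[float, float]

    def normalize(path: List[Point], n: int = 32) -> List[Point]:
        """Resample to n points, move the centroid to the origin, and scale to unit size."""
        total = sum(math.dist(a, b) for a, b in zip(path, path[1:])) or 1.0
        step, acc, resampled = total / (n - 1), 0.0, [path[0]]
        for a, b in zip(path, path[1:]):
            d = math.dist(a, b)
            if d == 0.0:
                continue
            while acc + d >= step and len(resampled) < n:
                t = (step - acc) / d
                a = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
                resampled.append(a)
                d = math.dist(a, b)
                acc = 0.0
            acc += d
        while len(resampled) < n:                      # guard against rounding shortfalls
            resampled.append(path[-1])
        cx = sum(p[0] for p in resampled) / n
        cy = sum(p[1] for p in resampled) / n
        size = max(max(p[0] for p in resampled) - min(p[0] for p in resampled),
                   max(p[1] for p in resampled) - min(p[1] for p in resampled)) or 1.0
        # Normalizing location and size makes the gesture independent of where on
        # the skin, and how large, it was drawn.
        return [((x - cx) / size, (y - cy) / size) for x, y in resampled]

    def recognize(path: List[Point], templates: Dict[str, List[Point]]) -> str:
        """Return the name of the template whose normalized shape is closest to the input."""
        candidate = normalize(path)
        def score(t: List[Point]) -> float:
            return sum(math.dist(p, q) for p, q in zip(candidate, normalize(t))) / len(candidate)
        return min(templates, key=lambda name: score(templates[name]))

    # Example: a roughly horizontal stroke drawn anywhere on the palm matches "swipe".
    templates = {"swipe": [(0, 0), (10, 0)], "check": [(0, 0), (3, -3), (10, 4)]}
    print(recognize([(5, 2), (14, 2.5)], templates))   # -> swipe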
Varying the area of skin contact by touching the skin with multiple fingers or with the whole
palm was used as input in 30% of the studies. These inputs include tapping with one to four fingers
[51], grasping the forearm using some of the fingers and the thumb [32], and bringing fingers or
whole palms together to vary the area of skin contact between the hands [44]. The user can, for
instance, select multiple targets on fewer tap locations by varying the number of tapping fingers.
In about a third (33%) of the papers, users deformed the skin as input. The skin can, for instance,
be pressed [39], pushed [35], or pinched [34]. Deformation input can be used to select a point and
a magnitude on a slider [36], for expressing emotions [53], or for controlling 3D models with just
one finger [39]. Only deformation of the skin introduced new types of input compared to on-device
touches.
locations on the non-dominant hand and arm on their perceived ease and comfort. They found
that the perceived ease of input modalities depends on the location. The skin on the palm, for
instance, is difficult to deform (e.g., with twisting or pulling), yet it was the preferred location for
touch input.
Fig. 2. The three types of sensing technologies. Optical sensing detects location well and can achieve a high
input resolution, but reliable detection of skin contact is hard, and users need to maintain certain postures so
as not to obscure the line of sight to the interactive area on the skin. On-skin sensing is good at detecting
skin contact and has been shown to achieve a resolution of 80 touch targets, but requires more processing
of sensor data and is sensitive to the user's movement. Touch sensors are excellent at detecting whether skin
contact occurs and do not restrict movement, but have a binary resolution and are hard to implement or are
invasive.
and can be attached on the skin outside the input area, leaving it uncovered. On-skin systems can
distinguish the size of the contact area on the skin [44, 51], tap locations [14, 32], or simply whether
a tap occurred [6].
Touch sensors detect touch directly. They include capacitive touch sensors, piezo-electrical
sensors, and pressure sensors. Such sensors were used in 21% of the studies. Touch sensors can be
placed in three ways to enable direct sensing and touch on bare skin. First, small capacitive touch
sensors can be placed on a fingertip, while still leaving most of it uncovered [8, 38, 49]. Second,
the sensors can be placed behind appendages of the body, such as behind the ear lobe [27]. Third,
the touch sensors can be implanted and sense touch through the skin [16]. Capacitive sensors are
binary, but pressure sensors can detect multiple levels of pressure through the skin [16].
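For illustration, this difference shows up directly in how raw readings are turned into input events: a capacitive sensor is thresholded into an on/off contact state, whereas a pressure reading can be quantized into several levels. The Python sketch below uses hypothetical normalized sensor values and invented thresholds, with a small hysteresis band so the reported level does not flicker near a boundary; it is not tied to any specific sensor in the surveyed work.

    # Sketch of turning raw sensor readings into input events. The value ranges,
    # thresholds, and hysteresis width are hypothetical, chosen for illustration only.

    def capacitive_touch(raw: float, threshold: float = 0.5) -> bool:
        """Binary contact detection: the sensor reports only touch / no touch."""
        return raw >= threshold

    def pressure_level(raw: float, previous: int, levels=(0.2, 0.5, 0.8),
                       hysteresis: float = 0.05) -> int:
        """Quantize a normalized pressure reading (0..1) into len(levels)+1 levels."""
        candidate = sum(raw >= t for t in levels)
        if candidate == previous:
            return previous
        # Accept the change only if the reading is clearly past the crossed boundary,
        # so the level does not flicker when the reading hovers near a threshold.
        boundary = levels[previous] if candidate > previous else levels[candidate]
        return candidate if abs(raw - boundary) >= hysteresis else previous

    # Example: a press that builds up and is then released.
    level = 0
    for reading in (0.10, 0.22, 0.35, 0.55, 0.90, 0.40, 0.05):
        level = pressure_level(reading, level)
        print(f"reading {reading:.2f} -> level {level}, touch {capacitive_touch(reading)}")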
(e.g., the nail of the middle finger) touched with one finger or the whole palm and sliding downward
and upward [51]. The ultrasound sensing developed by Mujibiya et al. [32] obtained 86.24%
accuracy in distinguishing grasps with one to four fingers, and 94.72% accuracy in distinguishing
points on the palm and on the back of the hand.
Sensors have been combined to achieve better recognition rates of input. For example, optical
systems were combined with capacitive touch sensors [8, 37, 38, 49], proximity sensors [45],
accelerometers [42], and piezo-electrical sensors [20] to reliably detect touch or to trigger location
sensing. Combining optical sensing and touch sensors is beneficial because the former recognizes
the location of touch well, while the latter recognizes the occurrence of a touch well.
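The division of labor in such combinations is simple: the touch sensor acts as a reliable trigger, and the optical tracker supplies the coordinates attached to that trigger. The Python sketch below illustrates this fusion under assumed, hypothetical sensor interfaces; it is not the pipeline of any of the cited systems.

    # Sketch of fusing a touch sensor (reliable contact detection) with an optical
    # tracker (reliable location). Both device interfaces are hypothetical.
    from dataclasses import dataclass
    from typing import Iterator, Optional, Tuple

    @dataclass
    class Sample:
        touching: bool                            # from the touch sensor (binary)
        location: Optional[Tuple[float, float]]   # from the optical tracker; may drop out

    def taps(samples: Iterator[Sample]) -> Iterator[Tuple[float, float]]:
        """Emit a tap location on each rising edge of the touch signal.

        The touch sensor decides whether contact occurred; the optical tracker
        decides where. If the tracker has no estimate at that instant, the last
        known location is used."""
        was_touching = False
        last_location: Optional[Tuple[float, float]] = None
        for s in samples:
            if s.location is not None:
                last_location = s.location
            if s.touching and not was_touching and last_location is not None:
                yield last_location
            was_touching = s.touching

    # Example: the finger hovers, touches at (42, 17), lifts, and touches again.
    stream = [Sample(False, (40.0, 15.0)), Sample(True, (42.0, 17.0)),
              Sample(True, None), Sample(False, (60.0, 30.0)), Sample(True, None)]
    print(list(taps(iter(stream))))   # -> [(42.0, 17.0), (60.0, 30.0)]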
The resolution of optical sensing to track location of input is usually high. For example, the
Vicon motion tracking system can track the marker position with a 1 mm accuracy [49], and the
watch-based IR sensing by Lim et al. [24] detected the location of a robot finger on the back of
the hand with accuracies of 7.02 mm and 4.32 mm in the x and y directions. In contrast, with on-skin
sensing the ability to track multiple target locations depends on the signal features, and machine
learning and classification capabilities of the system. The Skinput system, for instance, used on-
skin sensing of acoustic signals and classified taps on 10 targets with 81.5% accuracy [14], and the
SkinTrack system reached a mean distance error of 8.9 mm across 80 targets [55]. With touch sensors, the
number of targets depends on the number of sensors because the sensing is binary.
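For on-skin sensing in particular, the practical resolution thus emerges from a train-and-classify pipeline: candidate taps are windowed out of the sensor signal, reduced to a feature vector, and assigned to the most likely target by a classifier trained on examples from each location. The Python sketch below shows the shape of such a pipeline using generic band-energy features, a nearest-centroid classifier, and synthetic data; the surveyed systems use their own, richer feature sets and trained classifiers.

    # Illustrative on-skin classification pipeline: window -> features -> classifier.
    # The features, classifier, and synthetic data are stand-ins for illustration only.
    import numpy as np

    def band_energies(window: np.ndarray, n_bands: int = 8) -> np.ndarray:
        """Reduce a signal window to normalized energies in n_bands frequency bands."""
        spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
        energies = np.array([band.sum() for band in np.array_split(spectrum, n_bands)])
        return energies / (energies.sum() + 1e-12)

    class NearestCentroid:
        """Assign a feature vector to the target whose mean training vector is closest."""
        def fit(self, features: np.ndarray, labels: np.ndarray) -> "NearestCentroid":
            self.labels_ = np.unique(labels)
            self.centroids_ = np.stack([features[labels == l].mean(axis=0) for l in self.labels_])
            return self

        def predict(self, features: np.ndarray) -> np.ndarray:
            distances = np.linalg.norm(features[:, None, :] - self.centroids_[None], axis=2)
            return self.labels_[distances.argmin(axis=1)]

    # Synthetic example: taps at two locations that differ in their spectral content.
    rng = np.random.default_rng(0)
    def synthetic_tap(freq_hz: float) -> np.ndarray:
        t = np.arange(1024) / 4000.0                      # 1024 samples at 4 kHz
        decay = np.exp(-8 * t)                            # decaying, tap-like oscillation
        return np.sin(2 * np.pi * freq_hz * t) * decay + 0.05 * rng.standard_normal(1024)

    features = np.stack([band_energies(synthetic_tap(f)) for f in [200] * 20 + [900] * 20])
    labels = np.array(["wrist"] * 20 + ["elbow"] * 20)
    model = NearestCentroid().fit(features, labels)
    print(model.predict(np.stack([band_energies(synthetic_tap(220))])))   # -> ['wrist']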
Only one system tracked both sides of the hand and forearm, although separately [32]. On-skin
sensing was used to track input on targets placed around the forearm or the wrist, on two rows
on opposite sides of the forearm, or as two single targets on both sides of the hand. Three studies
tracked one side of both the hand and the forearm together with on-skin sensing [14, 51, 55]. None
of the systems tracked input around the entire forearm and hand.
Movement of the user can introduce noise into the signals that are tracked to recognize input, and
therefore hamper the performance of tracking systems. Touch sensors suffer the least from noise,
and with them, users are free to move and change their body posture. In contrast, on-skin
sensing technologies are prone to noise caused by movements of the user because the tracked
signals propagate through the body, and muscle activation interferes with this.
Optical sensing places the most restrictions on the user’s mobility; maintaining line of sight
to the input area is necessary. Thus, most studies of optical sensing used fixed hand postures,
preventing, for instance, flexion of the wrist, which could bring the input surface of the hand
too close to the sensor, potentially confusing it with the finger. For example, the wrist joint was
restricted to a neutral pose to track taps on the hand with cameras and IR sensors attached to
a wristband [20, 24, 42], and the palm was kept flat and still by affixing it onto a table surface
[49] or to a cardboard equipped with markers [8]. These approaches also help to detect when the
fingertip touches the skin. Furthermore, most optical systems are sensitive to lighting conditions
and need to minimize light pollution from the environment. For example, an LED flash [42] or
infrared light emitters can help in highlighting the skin of the finger to distinguish it from other
surfaces reflecting light further away [24, 49].
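The effect of such active illumination is that nearby skin reflects much more of the emitted light than the background, so even a simple intensity threshold can separate the finger from farther surfaces. As an illustration, the Python sketch below estimates a fingertip position as the intensity-weighted centroid of the bright pixels in a synthetic IR frame; the frame, threshold, and blob model are invented for illustration and do not correspond to any surveyed system.

    # Toy illustration of active-IR finger segmentation: nearby skin reflects the
    # emitted light strongly, so bright pixels are taken as the finger. The frame,
    # threshold, and fingertip model are invented for illustration.
    import numpy as np

    def fingertip_estimate(ir_frame: np.ndarray, threshold: float):
        """Return the (row, col) centroid of pixels brighter than threshold, or None."""
        mask = ir_frame > threshold
        if not mask.any():
            return None                       # no finger close to the emitter
        rows, cols = np.nonzero(mask)
        weights = ir_frame[rows, cols]        # weight by intensity: brightest pixels dominate
        return (float(np.average(rows, weights=weights)),
                float(np.average(cols, weights=weights)))

    # Synthetic 64x64 IR frame: dim background plus a bright blob around (40, 22).
    rng = np.random.default_rng(1)
    frame = 0.1 * rng.random((64, 64))
    r, c = np.mgrid[0:64, 0:64]
    frame += 0.9 * np.exp(-((r - 40) ** 2 + (c - 22) ** 2) / 20.0)

    print(fingertip_estimate(frame, threshold=0.5))   # approximately (40.0, 22.0)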
To summarize, optical sensing may achieve the highest resolution, while touch sensors may
allow the most freedom in the user's posture and movement. Optical and on-skin sensing show the
potential for tracking many types of input, and on-skin and touch sensors can complement optical
sensing for more robust touch detection.
sports paints, and notes [47]; the locations and types of such marks have strong effects
on how they are perceived; similarly, the location of touch on the body has personal and social
significance. Next, we discuss what this means for designing user interfaces for the skin.
interface and displayed feedback (e.g., indicated target selection) directly on the skin where the
input was performed, allowing direct touch interaction similar to common touchscreens. A further
35% provided visual feedback on watch faces. Visual feedback was most frequently (62%) displayed
outside the body surface on external devices. Audio feedback was provided in 16% of the interfaces.
Surprisingly, none of the interfaces provided haptic feedback, and only one prototyped haptic
output by implanting an actuator [16].
Visual feedback is important for effective input. For example, Harrison et al. [14] found that
eyes-free tapping on the hand and forearm is on average 10.5% less accurate than tapping on
visual targets. Lissermann et al. [27] studied tapping on the ear, where no visual guidance was
available. Tapping on the ear lobe had an average accuracy of 80% on four, 64% on five, and 58%
on six targets. Gustafson et al. [11] found that participants retrieved targets on the palm 19% faster
with sliding input when they were able to look at the palm than when they were
blindfolded.
Yet, the studies suggest that skin input is also possible when a user's view of the hand is limited.
For example, the average typing speed on a QWERTY keyboard on the palm was 10.5 words per
minute [49], the key size needed for achieving over 90% touch accuracy on nine targets on the palm
side of the hand and fingers was 28 mm [8], and the average accuracy for tapping five sections on
the forearm was 84% [26].
Feeling the touching finger on the skin has been suggested to help users find targets
without visual feedback. For example, Gustafson et al. [11] examined the importance of such passive
tactile feedback by using a fake palm in one study condition, which prevented participants from
feeling their fingertip on the skin. The results showed a 30% slower performance in finding
targets on the fake palm. In addition, Lin et al. [26] found slower but more accurate target
selection with sliding input than with tapping.
commands can be communicated with deformations. Evaluating user performance with such
inputs could help in characterizing the expressivity across different locations and input types on the
skin.
A second challenge is to increase the tracking resolution and accuracy in detecting input;
currently, we do not know how large an area and how many targets or commands can be effectively
sensed. Tracking performance (e.g., the sensing accuracy across the number of targets) is one of
the basic measures of any input technology, and is necessary information for choosing which
technology to use for skin input. However, the studies rarely measured sensing performance
separately from user performance. Following the approach of Lim et al. [24], the sensing accuracies
could be measured with robots. This would help identify suitable sensors early, before investing in
the machine learning and classification methods that are only needed in final
applications to interpret real user input. Further, the resolutions and tracking accuracies of current
sensing technologies could be improved by combining sensors. Optical sensing has already been
successfully combined with other sensing technologies to improve recognition rates of tapping
input [8, 20, 37, 38, 42, 45, 49]. Combining sensors could also allow tracking multiple input types,
such as deformation in addition to tapping.
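One way to report such tracking performance independently of users is to compute, from ground-truth probe positions and the corresponding sensor estimates, the fraction of inputs that would land in the correct target for a given target count. The Python sketch below shows this computation for a one-dimensional input area; the ground-truth and estimate data are made up for illustration and do not come from any surveyed system.

    # Sketch: sensing accuracy as a function of the number of targets, computed from
    # ground-truth probe positions and sensor estimates. All data here is made up.
    import numpy as np

    def accuracy_by_target_count(true_mm: np.ndarray, estimated_mm: np.ndarray,
                                 area_mm: float, target_counts) -> dict:
        """For each target count, divide the input area into equal bins and report the
        fraction of samples whose estimate falls in the same bin as the ground truth."""
        results = {}
        for n in target_counts:
            edges = np.linspace(0.0, area_mm, n + 1)
            true_bin = np.clip(np.digitize(true_mm, edges) - 1, 0, n - 1)
            est_bin = np.clip(np.digitize(estimated_mm, edges) - 1, 0, n - 1)
            results[n] = float((true_bin == est_bin).mean())
        return results

    # Made-up example: a 200 mm strip of forearm, estimates with roughly 5 mm tracking error.
    rng = np.random.default_rng(2)
    truth = rng.uniform(0, 200, size=500)
    estimates = truth + rng.normal(0, 5, size=500)
    print(accuracy_by_target_count(truth, estimates, 200.0, [4, 8, 16, 32]))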
6 CONCLUSION
The skin is a wonderful opportunity for interaction designers and researchers. Touching the skin
is a natural and important means of communication. Touch has strong effects on well-being and
health, and on how people behave and feel about other people, products, and services [2, 43].
People naturally use their skin, for instance, to show support for a sports team with
face paint, to visualize important things with tattoos, and to remind themselves of a call by writing
a phone number on the hand [47]. The skin can offer an expressive, personal, touch sensitive,
always-on input surface for human–computer interaction. Leveraging skin-specific characteristics
in designing user interfaces is exciting but also challenging.
The purpose of this article was to synthesize what skin input is, which technologies can sense
input on the skin, and how to design interfaces for the skin. The reviewed studies show that input
on the skin already works; touch was used for controlling menus, giving commands in games,
drawing symbols, typing, controlling music players, and communicating emotions. The largest
challenges are technical: ensuring robust, high-resolution tracking of input, including in mobile, real-
life use. The largest opportunities are in making use of the expressive features of the skin, such as
its large surface, deformation, landmarks, and new types of feedback. We hope that by addressing
challenges for research in human–computer interaction on the skin, this article also helps in
developing existing technologies further and in designing expressive and effective input types and
interfaces for the skin.
REFERENCES
[1] Joanna Bergstrom-Lehtovirta, Sebastian Boring, and Kasper Hornbæk. 2017. Placing and recalling virtual items on
the skin. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 1497–1507.
[2] S. Adam Brasel and James Gips. 2014. Tablets, touchscreens, and touchpads: How varying touch interfaces trigger
psychological ownership and endowment. Journal of Consumer Psychology 24, 2 (2014), 226–233.
[3] Jesse Burstyn, Paul Strohmeier, and Roel Vertegaal. 2015. DisplaySkin: Exploring pose-aware displays on a flexible
electrophoretic wristband. In Proceedings of the 9th International Conference on Tangible, Embedded, and Embodied
Interaction. ACM, 165–172.
[4] Liwei Chan, Yi-Ling Chen, Chi-Hao Hsieh, Rong-Hao Liang, and Bing-Yu Chen. 2015. Cyclopsring: Enabling whole-
hand and context-aware interactions through a fisheye ring. In Proceedings of the 28th Annual ACM Symposium on
User Interface Software and Technology. ACM, 549–556.
[5] Liwei Chan, Rong-Hao Liang, Ming-Chang Tsai, Kai-Yin Cheng, Chao-Huai Su, Mike Y. Chen, Wen-Huang Cheng,
and Bing-Yu Chen. 2013. FingerPad: Private and subtle interaction using fingertips. In Proceedings of the 26th Annual
ACM Symposium on User Interface Software and Technology. ACM, 255–260.
[6] David Coyle, James Moore, Per Ola Kristensson, Paul Fletcher, and Alan Blackwell. 2012. I did that! Measuring users’
experience of agency in their own actions. In Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems. ACM, 2025–2034.
[7] Artem Dementyev and Joseph A. Paradiso. 2014. WristFlex: Low-power gesture input with wrist-worn pressure sen-
sors. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology. ACM, 161–166.
[8] Niloofar Dezfuli, Mohammadreza Khalilbeigi, Jochen Huber, Murat Özkorkmaz, and Max Mühlhäuser. 2014. PalmRC:
Leveraging the palm surface as an imaginary eyes-free television remote control. Behaviour & Information Technology
33, 8 (2014), 829–843.
[9] Madeline Gannon, Tovi Grossman, and George Fitzmaurice. 2015. Tactum: A skin-centric approach to digital design
and fabrication. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM,
1779–1788.
[10] Sean Gustafson, Christian Holz, and Patrick Baudisch. 2011. Imaginary phone: Learning imaginary interfaces by
transferring spatial memory from a familiar device. In Proceedings of the 24th Annual ACM Symposium on User Inter-
face Software and Technology. ACM, 283–292.
[11] Sean G. Gustafson, Bernhard Rabe, and Patrick M. Baudisch. 2013. Understanding palm-based imaginary interfaces:
The role of visual and tactile cues when browsing. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems. ACM, 889–898.
[12] Chris Harrison, Hrvoje Benko, and Andrew D. Wilson. 2011. OmniTouch: Wearable multitouch interaction every-
where. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology. ACM, 441–450.
[13] Chris Harrison, Shilpa Ramamurthy, and Scott E. Hudson. 2012. On-body interaction: Armed and dangerous. In
Proceedings of the 6th International Conference on Tangible, Embedded and Embodied Interaction. ACM, 69–76.
[14] Chris Harrison, Desney Tan, and Dan Morris. 2010. Skinput: Appropriating the body as an input surface. In Proceed-
ings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 453–462.
[15] Hayati Havlucu, Mehmet Yarkın Ergin, Idil Bostan, Oğuz Turan Buruk, Tilbe Göksun, and Oğuzhan Özcan. 2017. It
made more sense: Comparison of user-elicited on-skin touch and freehand gesture sets. In International Conference
on Distributed, Ambient, and Pervasive Interactions. Springer, 159–171.
[16] Christian Holz, Tovi Grossman, George Fitzmaurice, and Anne Agur. 2012. Implanted user interfaces. In Proceedings
of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 503–512.
[17] Hsin-Liu Cindy Kao, Christian Holz, Asta Roseway, Andres Calvo, and Chris Schmandt. 2016. DuoSkin: Rapidly proto-
typing on-skin user interfaces using skin-friendly materials. In Proceedings of the 2016 ACM International Symposium
on Wearable Computers. ACM, 16–23.
[18] Takashi Kikuchi, Yuta Sugiura, Katsutoshi Masai, Maki Sugimoto, and Bruce H. Thomas. 2017. EarTouch: Turning
the ear into an input surface. In Proceedings of the 19th International Conference on Human-Computer Interaction with
Mobile Devices and Services. ACM, 27.
[19] Scott Klemmer. 2011. Technical perspective: Skintroducing the future. Communications of the ACM 54, 8 (2011).
[20] Jarrod Knibbe, Diego Martinez Plasencia, Christopher Bainbridge, Chee-Kin Chan, Jiawei Wu, Thomas Cable, Hassan
Munir, and David Coyle. 2014. Extending interaction for smart watches: Enabling bimanual around device control.
In CHI’14 Extended Abstracts on Human Factors in Computing Systems. ACM, 1891–1896.
[21] Jarrod Knibbe, Sue Ann Seah, and Mike Fraser. 2014. VideoHandles: Replicating gestures to search through action-
camera video. In Proceedings of the 2nd ACM Symposium on Spatial User Interaction. ACM, 50–53.
[22] Gierad Laput, Robert Xiao, Xiang 'Anthony' Chen, Scott E. Hudson, and Chris Harrison. 2014. Skin buttons: Cheap,
small, low-powered and clickable fixed-icon laser projectors. In Proceedings of the 27th Annual ACM Symposium on
User Interface Software and Technology. ACM, 389–394.
[23] Juyoung Lee, Hui Shyong Yeo, Murtaza Dhuliawala, Jedidiah Akano, Junichi Shimizu, Thad Starner, Aaron Quigley,
Woontack Woo, and Kai Kunze. 2017. Itchy nose: Discreet gesture interaction using EOG sensors in smart eyewear. In
Proceedings of the 29th ACM International Symposium on Wearable Computers (ISWC ’17). Association for Computing
Machinery.
[24] Soo-Chul Lim, Jungsoon Shin, Seung-Chan Kim, and Joonah Park. 2015. Expansion of smartwatch touch interface
from touchscreen to around device interface using infrared line image sensors. Sensors 15, 7 (2015), 16642–16653.
[25] Jhe-Wei Lin, Chiuan Wang, Yi Yao Huang, Kuan-Ting Chou, Hsuan-Yu Chen, Wei-Luan Tseng, and Mike Y. Chen.
2015. Backhand: Sensing hand gestures via back of the hand. In Proceedings of the 28th Annual ACM Symposium on
User Interface Software and Technology. ACM, 557–564.
[26] Shu-Yang Lin, Chao-Huai Su, Kai-Yin Cheng, Rong-Hao Liang, Tzu-Hao Kuo, and Bing-Yu Chen. 2011. Pub - Point upon
body: Exploring eyes-free interaction and methods on an arm. In Proceedings of the 24th Annual ACM Symposium on
User Interface Software and Technology. ACM, 481–488.
[27] Roman Lissermann, Jochen Huber, Aristotelis Hadjakos, Suranga Nanayakkara, and Max Mühlhäuser. 2014. EarPut:
Augmenting ear-worn devices for ear-based interaction. In Proceedings of the 26th Australian Computer-Human In-
teraction Conference on Designing Futures: the Future of Design. ACM, 300–307.
[28] Joanne Lo, Doris Jung Lin Lee, Nathan Wong, David Bui, and Eric Paulos. 2016. Skintillates: Designing and creating
epidermal interactions. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. ACM, 853–864.
[29] Christian Loclair, Sean Gustafson, and Patrick Baudisch. 2010. PinchWatch: A wearable device for one-handed mi-
crointeractions. In Proceedings of MobileHCI, Vol. 10.
[30] Pedro Lopes, Doğa Yüksel, François Guimbretiere, and Patrick Baudisch. 2016. Muscle-plotter: An interactive system
based on electrical muscle stimulation that produces spatial output. In Proceedings of the 29th Annual Symposium on
User Interface Software and Technology. ACM, 207–217.
[31] Yasutoshi Makino, Yuta Sugiura, Masa Ogata, and Masahiko Inami. 2013. Tangential force sensing system on forearm.
In Proceedings of the 4th Augmented Human International Conference. ACM, 29–34.
[32] Adiyan Mujibiya, Xiang Cao, Desney S. Tan, Dan Morris, Shwetak N. Patel, and Jun Rekimoto. 2013. The sound of
touch: On-body touch and gesture sensing based on transdermal ultrasound propagation. In Proceedings of the 2013
ACM International Conference on Interactive Tabletops and Surfaces. ACM, 189–198.
[33] Kei Nakatsuma, Rhoma Takedomi, Takaaki Eguchi, Yasutaka Oshima, and Ippei Torigoe. 2015. Active bioacoustic
measurement for human-to-human skin contact area detection. In 2015 IEEE SENSORS. IEEE, 1–4.
[34] Masa Ogata and Michita Imai. 2015. SkinWatch: Skin gesture interaction for smart watch. In Proceedings of the 6th
Augmented Human International Conference. ACM, 21–24.
[35] Masa Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, and Michita Imai. 2013. SenSkin: Adapting skin as a
soft interface. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology. ACM,
539–544.
[36] Masa Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, and Michita Imai. 2014. Augmenting a wearable dis-
play with skin surface as an expanded input area. In International Conference of Design, User Experience, and Usability.
Springer, 606–614.
[37] Uran Oh and Leah Findlater. 2014. Design of and subjective response to on-body input for people with visual im-
pairments. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers and Accessibility. ACM,
115–122.
[38] Uran Oh and Leah Findlater. 2015. A performance comparison of on-hand versus on-phone nonvisual input by blind
and sighted users. ACM Transactions on Accessible Computing (TACCESS) 7, 4 (2015), 14.
[39] Ryosuke Ono, Shunsuke Yoshimoto, and Kosuke Sato. 2013. Palm+Act: Operation by visually captured 3D force on
palm. In SIGGRAPH Asia 2013 Emerging Technologies. ACM, 14.
[40] Antti Oulasvirta and Joanna Bergstrom-Lehtovirta. 2011. Ease of juggling: Studying the effects of manual multitask-
ing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3103–3112.
[41] Manuel Prätorius, Aaron Scherzinger, and Klaus Hinrichs. 2015. SkInteract: An on-body interaction system based
on skin-texture recognition. In Human-Computer Interaction. Springer, 425–432.
[42] Manuel Prätorius, Dimitar Valkov, Ulrich Burgbacher, and Klaus Hinrichs. 2014. DigiTap: An eyes-free VR/AR sym-
bolic input device. In Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology. ACM, 9–18.
[43] Zdravko Radman. 2013. The Hand, an Organ of the Mind: What the Manual Tells the Mental. MIT Press, 108–119.
[44] Munehiko Sato, Ivan Poupyrev, and Chris Harrison. 2012. Touché: Enhancing touch interaction on humans, screens,
liquids, and everyday objects. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM,
483–492.
[45] Marcos Serrano, Barrett M. Ens, and Pourang P. Irani. 2014. Exploring the use of hand-to-face input for interacting
with head-worn displays. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems.
ACM, 3181–3190.
[46] Srinath Sridhar, Anders Markussen, Antti Oulasvirta, Christian Theobalt, and Sebastian Boring. 2017. WatchSense:
On-and above-skin input sensing through a wearable depth sensor. In Proceedings of the 2017 CHI Conference on
Human Factors in Computing Systems. ACM, 3891–3902.
[47] Paul Strohmeier, Juan Pablo Carrascal, and Kasper Hornbæk. 2016. What can doodles on the arm teach us about
on-body interaction? In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing
Systems. ACM, 2726–2735.
[48] Ying-Chao Tung, Chun-Yen Hsu, Han-Yu Wang, Silvia Chyou, Jhe-Wei Lin, Pei-Jung Wu, Andries Valstar, and Mike
Y. Chen. 2015. User-defined game input for smart glasses in public space. In Proceedings of the 33rd Annual ACM
Conference on Human Factors in Computing Systems. ACM, 3327–3336.
[49] Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, and Mike Y. Chen. 2015.
PalmType: Using palms as keyboards for smart glasses. In Proceedings of the 17th International Conference on Human-
Computer Interaction with Mobile Devices and Services. ACM, 153–160.
[50] Cheng-Yao Wang, Min-Chieh Hsiu, Po-Tsung Chiu, Chiao-Hui Chang, Liwei Chan, Bing-Yu Chen, and Mike Y. Chen.
2015. PalmGesture: Using palms as gesture interfaces for eyes-free input. In Proceedings of the 17th International
Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, 217–226.
[51] Yuntao Wang, Chun Yu, Lin Du, Jin Huang, and Yuanchun Shi. 2014. BodyRC: Exploring interaction modalities using
human body as lossy signal transmission medium. In Proceedings of the 2014 IEEE 11th International Conference on
Ubiquitous Intelligence and Computing, IEEE 11th International Conference on Autonomic and Trusted Computing, and
IEEE 14th International Conference on Scalable Computing and Communications and Its Associated Workshops (UTC-
ATC-ScalCom’14). IEEE, 260–267.
[52] Martin Weigel, Tong Lu, Gilles Bailly, Antti Oulasvirta, Carmel Majidi, and Jürgen Steimle. 2015. iSkin: Flexible,
stretchable and visually customizable on-body touch sensors for mobile computing. In Proceedings of the 33rd Annual
ACM Conference on Human Factors in Computing Systems. ACM, 2991–3000.
[53] Martin Weigel, Vikram Mehta, and Jürgen Steimle. 2014. More than touch: Understanding how people use skin as an
input surface for mobile computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
ACM, 179–188.
[54] Martin Weigel, Aditya Shekhar Nittala, Alex Olwal, and Jürgen Steimle. 2017. SkinMarks: Enabling interactions on
body landmarks using conformal skin electronics. In Proceedings of the 2017 CHI Conference on Human Factors in
Computing Systems. ACM, 3095–3105.
[55] Yang Zhang, Junhan Zhou, Gierad Laput, and Chris Harrison. 2016. SkinTrack: Using the body as an electrical
waveguide for continuous finger tracking on the skin. In Proceedings of the 2016 CHI Conference on Human Factors in
Computing Systems. ACM, 1491–1503.