Human–Computer Interaction on the Skin

JOANNA BERGSTRÖM and KASPER HORNBÆK, University of Copenhagen

The skin offers exciting possibilities for human–computer interaction by enabling new types of input and
feedback. We survey 42 research papers on interfaces that allow users to give input on their skin. Skin-based
interfaces have developed rapidly over the past 8 years but most work consists of individual prototypes,
with limited overview of possibilities or identification of research directions. The purpose of this article is to
synthesize what skin input is, which technologies can sense input on the skin, and how to give feedback to
the user. We discuss challenges for research in each of these areas.
CCS Concepts: • Human-centered computing → Interaction devices;
Additional Key Words and Phrases: Skin input, on-body interaction, tracking technologies
ACM Reference format:
Joanna Bergström and Kasper Hornbæk. 2019. Human–Computer Interaction on the Skin. ACM Comput. Surv.
52, 4, Article 77 (August 2019), 14 pages.
https://doi.org/10.1145/3332166

1 INTRODUCTION
Our skin is an attractive platform for user interfaces. The skin provides a large surface for input
that is always with us and that enables rich types of interaction. We can, for instance, navigate the
menu of a mobile phone by sliding a finger across invisible controls on the palm [11] or control a
computer by tapping shortcuts on the forearm [26]. User input can be estimated using cameras [4,
12, 42] or acoustic signals propagating on the skin [14, 26, 32].
The skin enables exciting possibilities for interaction and user interface design. First, the skin
enables new input types. It can be touched, grabbed, pulled, pressed, scratched, sheared, squeezed,
and twisted [53], and we easily relate meanings to these actions, such as equating a strong grab
with anger. Second, skin-based interfaces free us from carrying mobile devices in our hands and
extend their input areas to support off-screen input. Our skin surface is hundreds of times larger
than the touchscreen of an average mobile phone [19] and can be turned into an input device using
a watch or an armband worn on the body. Third, feeling input on the skin can help us achieve better
user experience and effectiveness than when using common input devices [19].
Since the widely cited paper on skin input by Harrison et al. [14] was published in 2010, re-
searchers have developed many novel technologies for skin-based interfaces. The pros and cons of
those technologies have not been systematically compared. Moreover, we understand little about

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020
Research and Innovation Program (grant agreement 648785).
Authors’ address: J. Bergström and K. Hornbæk, University of Copenhagen, Department of Computer Science, Sigurdsgade
41, 1st fl, 2200 Copenhagen, Denmark; emails: {joanna, kash}@di.ku.dk.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires
prior specific permission and/or a fee. Request permissions from [email protected].
© 2019 Association for Computing Machinery.
0360-0300/2019/08-ART77 $15.00
https://doi.org/10.1145/3332166


the benefits of skin input and how to lay out controls and give feedback on the skin. The aim
of this article is to step back and synthesize what we know about human–computer interaction
on the skin, and outline key challenges for research. We also aim to move beyond the oddities of
individual prototypes toward some general insights on new opportunities.

1.1 Literature for the Survey


This article presents findings of a systematic literature review on skin input. We reviewed a sam-
ple of 42 papers published between 2010 and 2017 on human–computer interaction where users
provide input on their bare skin. We include input types where the effector is a body part (such as
the finger tip) and it is used to provide input on one’s own body (such as on the forearm). There-
fore, input performed with devices, such as styluses [41], input on external material on the skin
[3, 52], or input on other people [33] are excluded. The interactions included are taps or gestures
performed in skin contact and gestures that are not fully performed in contact but are interpreted
from the contacts. An example of the latter is a double tap that is estimated from two contact
onsets that occur within a certain time interval. However, we do not include input that is per-
formed and measured as body postures instead of skin contact, even if it would involve contact,
such as pinching gestures [7, 21, 25, 29] or connecting hands [13].
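To illustrate how a gesture like the double tap above can be interpreted from contact onsets, the following sketch is our own illustration, not code from any reviewed system; the 300 ms window is a hypothetical parameter. It flags a double tap whenever two consecutive onsets fall within the chosen time interval.

```python
def detect_double_taps(onset_times_s: list[float], max_interval_s: float = 0.3) -> list[float]:
    """Return the onset times at which a double tap would be recognized.

    A double tap is inferred whenever two consecutive contact onsets occur
    within max_interval_s seconds of each other.
    """
    double_taps = []
    for earlier, later in zip(onset_times_s, onset_times_s[1:]):
        if later - earlier <= max_interval_s:
            double_taps.append(later)
    return double_taps

# Example: onsets at 0.10 s and 0.32 s form a double tap; the onset at 1.50 s stands alone.
print(detect_double_taps([0.10, 0.32, 1.50]))  # -> [0.32]
```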
We used two exact queries (“skin input” and “on-body interaction”) in parallel to search litera-
ture with Google Scholar. Subsequently, we conducted a forward chaining reference search from
the four earliest publications (from 2010 and 2011) in the sample [10, 12, 14, 26]. We screened
through the titles and included only full papers, notes, and archived extended abstracts from
posters, therefore excluding workshops, thesis works, talks, patents, and technical reports. When
the title did not provide an explicit reason for exclusion, we screened the whole publication to
evaluate it for inclusion. This led to a sample of 42 papers for the review.
We use the collected sample to survey what skin input is, which technologies can sense input
on the skin, and how to design skin-based interfaces. In Section 2, we catalog types of skin input.
The most common types are similar to the tapping and touch gestures used with touchscreens.
Consequently, our understanding of user experience and performance in skin-based interaction
is largely limited to these two input types. Section 3 describes technologies for sensing input on
the skin and outlines their strengths and weaknesses. We find that technical performance was
evaluated only in artificial conditions, such as with a single, fixed posture of the hand or with a
few touch targets. In Section 4, we discuss the broader challenges of designing interfaces for the
skin, for instance, how to use the shape of the body and landmarks on the skin as visual and tactile
cues for input and how to design interface layouts on external displays that are easy for users to
map onto the skin. On this basis, we identify and discuss the open challenges and next steps for
research in human–computer interaction on the skin in Section 5.

2 INPUT ON THE SKIN


The skin is frequently touched in communication: in handshakes, patting a child’s cheek, and
clapping. These natural forms of interaction and the deformable, always-on, large touch-sensitive
surface enable new types of input and new possibilities for expressive input. Next, we discuss how
users may give input on the skin and when different forms of input are effective.

2.1 Types of Input


Fig. 1. The four types of skin input. The area of skin contact can be varied, e.g., by touching with one to
multiple fingers, or by bringing some fingers or the whole palms together. The skin can be deformed, e.g., by
pressing, pushing, or pulling the skin with one finger, or by pinching with the index finger and the thumb.
Touch gestures include, e.g., drawing shapes, handwriting letters, or controlling a continuous slider. Discrete
touch input can be given by tapping, or by sliding the finger on the skin and selecting a target by lifting the
finger.

The most common input type in the reviewed studies was tapping. Tapping was employed in 69%
of the studies, but some of these used multiple types of input (50% of the studies used multiple
input types, therefore percentages do not add up). Tapping was used in selecting discrete touch
targets (Figure 1), similar to touching keys on a touchscreen. Another form of input for selecting
discrete targets is sliding a finger across the skin and selecting the target by lifting the finger or
double tapping (used in 13% of the studies). In a study by Gustafson et al. [11], participants used
sliding input to find menu items of a mobile phone on their palm. When sliding across an item
the participants heard the item name as audio feedback, and could select it using a double tap. Lin
et al. [26] examined how many targets blindfolded participants could distinguish on their forearm.
The participants were free to choose a strategy for selection. Three strategies emerged: tapping,
sliding, and jumping (i.e., tapping along the forearm until the selected location is reached). Most
studies (93%) used the index finger for tapping, and many (24%) also the thumb; a few studies
allowed using both. Sliding input was always performed with the index finger.
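To make the difference between these selection strategies concrete, here is a small, hypothetical event-handling sketch (not taken from any surveyed prototype; the 5 mm movement threshold is an assumption): a contact that lifts close to where it landed selects the target under the initial touch, whereas a contact that slides selects the target under the lift position.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    down_x: float  # position along the forearm where the finger landed (mm)
    up_x: float    # position along the forearm where the finger lifted (mm)

def select_target(contact: Contact, target_centers: list[float],
                  slide_threshold_mm: float = 5.0) -> tuple[str, int]:
    """Classify the contact as a tap or a slide and return the selected target index."""
    moved = abs(contact.up_x - contact.down_x)
    kind = "tap" if moved < slide_threshold_mm else "slide"
    point = contact.down_x if kind == "tap" else contact.up_x
    nearest = min(range(len(target_centers)), key=lambda i: abs(target_centers[i] - point))
    return kind, nearest

# Example: five targets along the forearm, centered at 10, 30, 50, 70, and 90 mm.
print(select_target(Contact(down_x=12.0, up_x=13.0), [10, 30, 50, 70, 90]))  # ('tap', 0)
print(select_target(Contact(down_x=12.0, up_x=68.0), [10, 30, 50, 70, 90]))  # ('slide', 3)
```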
Touch gestures, such as flicking, swiping, panning, zooming, and drawing shapes were used as
input in 37% of the studies. These gestures are similar to those used with touchscreens. With touch
gestures the users can, for instance, input letters by drawing on their palm [4, 50]. Touch gestures
do not require users to input on an absolute location on the skin, but are recognized based on
patterns.
Varying the area of skin contact by touching the skin with multiple fingers or with the whole
palm was used as input in 30% of the studies. These inputs include tapping with one to four fingers
[51], grasping the forearm using some of the fingers and the thumb [32], and bringing fingers or
whole palms together to vary the area of skin contact between the hands [44]. The user can, for
instance, select multiple targets on fewer tap locations by varying the number of tapping fingers.
In about a third (33%) of the papers, users deformed the skin as input. The skin can, for instance,
be pressed [39], pushed [35], or pinched [34]. Deformation input can be used to select a point and
a magnitude on a slider [36], for expressing emotions [53], or for controlling 3D models with just


one finger [39]. Only deformation of the skin introduced new types of input compared to on-device
touches.

2.2 Locations of Input


The hands and the fingers were the most common locations for input (87% of the studies), usually
on the palm side of the hand. In 67% of the studies, input was given on the wrist and forearm.
Other locations were also used; Mujibiya et al. [32] and Sato et al. [44] studied input on the head,
Serrano et al. [45] examined input on the face for smart glasses, Lee et al. [23] used the nose for
input, and Lissermann et al. [27] and Kikuchi et al. [18] studied input on the ear. The thumb was
mostly used for single-handed input on the same hand’s fingers, but Oh and Findlater [37] found
that users also provide input with the thumb on the non-dominant hand, gripping it similarly to a
mobile phone.

2.3 Evaluating Input on the Skin


The surveyed studies show that human–computer interaction on the skin is not just a concept for
the future; skin input already works. The effectiveness of skin input compared to touchscreens,
however, varies based on study conditions. For example, projected touch targets on the palm and
forearm need to be larger for accurate tapping compared to touchscreens [12], while typing on
the palm has been shown to be more accurate than on a touchscreen when the output is displayed
with smart glasses and the participants’ view of their hand is obstructed [49].
User performance with input types other than tapping has rarely been studied. Therefore,
the input types are hard to compare. Only Oh and Findlater [38] studied user performance with
touch gestures, and no study measured user performance with gestures that vary the area of skin
contact or with deformation of the skin.
The success of skin input depends on how users experience it. The studies suggested that users
prefer skin input over input on touchscreens, especially when visual feedback is limited.
For example, 9 of 12 blindfolded participants in a study by Oh and Findlater [38] preferred to tap
and draw on the palm rather than on a mobile phone. Serrano et al. [45] showed that participants
preferred to perform gestures on their face compared to a head-worn device. Gesturing on the
cheeks, for instance, was found less tiring than gesturing on the temple of smart glasses. Havlucu
et al. [15] found that on-skin gestures were experienced as physically less demanding and more
socially acceptable than mid-air gestures.
The studies also collected subjective measures of factors that influence the user experience,
such as comfort, frustration, mental demands, preference, and sense of urgency. These were used
to compare skin-based interfaces to existing interfaces [8], to compare interface layouts on the
skin [49], and to evaluate input gestures [45, 48, 53]. For example, continuous tapping was found
to be a preferred gesture for “force close” and “emergency call” commands, and for indicating
“boredom” and “nervousness” [53]. Interestingly, participants occasionally chose uncomfortable
but meaningful actions for input. For example, they preferred the input for “anger,” such as twisting
or squeezing the skin, to hurt a little [53].
The effects of input locations were examined both on input performance and on preference of
input types. Harrison et al. [12] concluded that to achieve 95% input accuracy with their system
on nine touch targets on the palm, the diameter of the targets should be 22.5 mm at minimum, and
targets on the forearm that are not centered require a larger diameter of 25.7 mm. They suggested
that the larger inaccuracy on the forearm was caused by the “curved” sides of the input area. Oh
and Findlater [37] also found that the hand is the preferred area for input compared to the forearm,
head, and face. They suggested that social acceptability of input location is more important than
its ease of use and physical comfort. Weigel et al. [53] compared eight input modalities and six


locations on the non-dominant hand and arm on their perceived ease and comfort. They found
that the perceived ease of input modalities depends on the location. The skin on the palm, for
instance, is difficult to deform (e.g., with twisting or pulling), yet it was the preferred location for
touch input.

3 TECHNOLOGIES FOR SKIN INPUT


Sensing input on bare skin requires new technical solutions. Keyboards and touchscreens cover
most or all of the input area with sensors, allowing robust sensing. Such direct sensing on the
skin, however, requires inventive placement or implanting, and indirect sensing is less robust.
Next we discuss the technical challenges in tracking input on the skin.

3.1 Types of Sensing


From the 28 papers in our sample that tracked input, three main types of sensing technologies
emerged: optical sensing, sensing of mechanical or electrical changes in the skin, and touch sensors
placed directly on or underneath the skin (Figure 2).
Optical sensing can be used for tracking the location of the inputting finger on the skin as a
distance from the sensor (i.e., as depth). Sensing systems from this category were the most com-
mon in the studies (86%). The systems include motion capture systems, still cameras, and infrared
sensors.
Motion capture systems, such as Vicon and OptiTrack, use markers attached to the skin surface
(e.g., a palm) and to the part of the body that gives input. Markers can be attached, for instance,
on top of the finger that is used for input to leave the fingertip uncovered [1, 11]. Those markers
are tracked by multiple cameras around the user to infer their locations.
Systems using infrared sensors and ultrasonic rangefinders emit light or sound, which reflect from
the inputting finger. The finger location is then estimated from the angle, amplitude, and other sig-
nal features of the reflection and its travel time. These sensors are small and often integrated in
watches or armbands. For example, watch-based IR sensors have been pointed toward the knuck-
les to track taps and touch gestures on the back side of the hand [20, 24, 42, 49], and ultrasonic
rangefinders pointed toward the elbow to track taps on the forearm [26]. IR sensors have also
been used to measure the vertical distance from a watch to the skin of the forearm [31, 34–36].
This distance represented the deformation of the skin, and the system used it to interpret the force
and direction of a finger press or pinch.
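To illustrate this principle, the sketch below is our own hedged reconstruction with made-up calibration constants, not code from the cited systems. It converts the drop in the measured sensor-to-skin distance relative to a resting baseline into an estimated press force using a simple linear spring model.

```python
def estimate_press_force(baseline_distance_mm: float,
                         measured_distance_mm: float,
                         stiffness_n_per_mm: float = 0.25,
                         noise_floor_mm: float = 0.5) -> float:
    """Estimate press force (N) from a watch-mounted distance reading on the forearm.

    Skin deformation is the decrease in sensor-to-skin distance relative to a calibrated
    resting baseline; the (placeholder) stiffness constant maps deformation to force.
    """
    deformation_mm = baseline_distance_mm - measured_distance_mm
    if deformation_mm < noise_floor_mm:  # ignore sensor noise when nothing is pressed
        return 0.0
    return deformation_mm * stiffness_n_per_mm

# Example: the skin rests 40 mm from the sensor; a press brings it to 34 mm.
print(estimate_press_force(40.0, 34.0))  # -> 1.5 N under the assumed stiffness
```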
Image processing can be used for detecting the form of the skin surface. One approach is to
estimate the edges of the hands and fingers by extracting the skin from the background of the
images and, with these edges, classify touch locations with machine learning [46]. For example,
thumb taps on the same hand’s digit fingers can be estimated by classifying postures by shapes
of extracted images of the skin [4] or by creating reference points on the thumb tip and on the
joints of each digit, inferring the tap locations based on the closeness of those points [42]. Another
approach is flood filling, which can be used to distinguish two surfaces or their distance
from each other, and thereby estimate when touch occurs [12]. A third approach is to only process
images of the skin that act as an input surface. Ono et al. [39], for instance, used optical flow vectors
of a palm print and estimated the 3D force applied on the palm from the deformation that these
vectors represent.
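As a deliberately simplified illustration of the reference-point idea described above (a hypothetical reconstruction, not the actual pipeline of the cited systems), the tapped joint can be taken to be the tracked joint point closest to the thumb tip:

```python
import math

def closest_joint(thumb_tip: tuple[float, float],
                  joint_points: dict[str, tuple[float, float]]) -> str:
    """Infer which finger-joint target the thumb is tapping.

    joint_points maps joint names to their tracked 2D image positions; the tap
    location is taken to be the joint nearest to the thumb tip.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(joint_points, key=lambda name: dist(thumb_tip, joint_points[name]))

# Example with made-up pixel coordinates for three joints of the index finger.
joints = {"index_base": (120, 200), "index_middle": (150, 170), "index_tip": (175, 140)}
print(closest_joint((148, 168), joints))  # -> 'index_middle'
```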
On-skin sensing was used in 46% of the studies. This is done by measuring changes in how sig-
nals propagate on and through the skin. The signals include sound and ultrasound waves, mechan-
ical vibrations caused by taps, and impedance, indicating a change in electrical circuits involving
the body and therefore a touch between two parts of the skin. Input is estimated by comparing the
measured signals to signal data representing a known input. The sensors and emitters are small
and can be attached on the skin outside the input area, leaving it uncovered. On-skin systems can
distinguish the size of the contact area on the skin [44, 51], tap locations [14, 32], or can imply
whether a tap occurred [6].

Fig. 2. The three types of sensing technologies. Optical sensing detects location well and can achieve a high
input resolution, but reliable detection of skin contact is hard. Users need to maintain certain postures so
as not to obscure the line of sight to the interactive area on the skin. On-skin sensing is good at detecting
skin contact and has been shown to achieve a resolution of 80 touch targets, but requires more processing
of sensor data and is sensitive to the user’s movement. Touch sensors are excellent at detecting whether skin
contact occurs, and do not restrict movement, but have a binary resolution, and are hard to implement or invasive.

Touch sensors detect touch directly. They include capacitive touch sensors, piezo-electrical
sensors, and pressure sensors. Such sensors were used in 21% of the studies. Touch sensors can be
placed in three ways to enable direct sensing and touch on bare skin. First, small capacitive touch
sensors can be placed on a fingertip, while still leaving most of it uncovered [8, 38, 49]. Second,
the sensors can be placed behind appendages of the body, such as behind the ear lobe [27]. Third,
the touch sensors can be implanted and sense touch through the skin [16]. Capacitive sensors are
binary, but pressure sensors can allow detecting multiple levels of pressure through the skin [16].

3.2 Technical Performance


Recognition accuracies of the sensing systems vary between different types of sensing. Optical
sensing generally performs well in detecting the location of the inputting finger on the skin but is
often inaccurate in recognizing whether a touch occurred or not. The reason is that these systems
cannot separate the fingertip from the skin that acts as an input surface (e.g., the palm). By using
image processing for sensing, CyclopsRing achieved an 84.75% recognition rate for seven one-hand
tap gestures [4], and DigiTap was able to correctly classify 91.9% of 12 thumb tap locations on the
digit finger joints [42]. PalmType used IR sensors and was able to recognize an average of 74.5%
of the taps on 28 keys on the palm [49].
On-skin sensing systems were used for tracking touch gestures and the area of skin contact in
addition to tapping. The BodyRC system achieved 86.76% accuracy in distinguishing five locations


(e.g., the nail of the middle finger) touched with one finger or the whole palm and sliding down-
ward and upward [51]. The ultrasound sensing developed by Mujibiya et al. [32] obtained 86.24%
accuracy in distinguishing grasps with one to four fingers, and 94.72% accuracy on distinguishing
points on the palm and on the back of the hand.
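These recognition rates come from classifiers that compare measured signal features against data recorded for known inputs (Section 3.1). As a deliberately simplified, hypothetical illustration of that idea, the sketch below assigns a tap to the location whose stored template is nearest in feature space; the cited systems use richer features and learned classifiers rather than this plain template matching.

```python
import math

# Hypothetical calibration data: per-location mean feature vectors
# (e.g., normalized amplitude, frequency, and decay features).
TEMPLATES = {
    "wrist":   [0.80, 0.70, 0.20],
    "mid_arm": [0.50, 0.55, 0.40],
    "elbow":   [0.30, 0.40, 0.65],
}

def classify_tap(features: list[float]) -> str:
    """Return the tap location whose template is closest to the measured features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda location: dist(features, TEMPLATES[location]))

print(classify_tap([0.55, 0.57, 0.38]))  # -> 'mid_arm'
```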
Sensors have been combined to achieve better recognition rates of input. For example, optical
systems were combined with capacitive touch sensors [8, 37, 38, 49], proximity sensors [45], ac-
celerometers [42], and piezo-electrical sensors [20] to reliably detect touch or to trigger location
sensing. Combining optical sensing and touch sensors is beneficial because the former recognizes
the location of touch well, while the latter recognizes the occurrence of a touch well.
The resolution of optical sensing to track location of input is usually high. For example, the
Vicon motion tracking system can track the marker position with a 1 mm accuracy [49], and the
watch-based IR sensing by Lim et al. [24] was able to detect the robot finger location on the back of
the hand with x- and y-direction accuracies of 7.02 mm and 4.32 mm. In contrast, with on-skin
sensing the ability to track multiple target locations depends on the signal features, and machine
learning and classification capabilities of the system. The Skinput system, for instance, used on-
skin sensing of acoustic signals and classified taps on 10 targets with 81.5% accuracy [14], and the
SkinTrack system [55] reached a mean distance error of 8.9 mm across 80 targets. With touch sensors, the
number of targets depends on the number of sensors because the sensing is binary.
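Both figures quoted above can be computed in the same way for any tracking system. The generic sketch below is not tied to any cited study; it derives per-target accuracy and mean distance error from logged pairs of intended target centers and estimated touch points, and the 10 mm target radius in the example is arbitrary.

```python
import math

Point = tuple[float, float]

def tracking_metrics(intended: list[Point], estimated: list[Point],
                     target_radius_mm: float) -> tuple[float, float]:
    """Return (accuracy, mean distance error in mm) over a log of touches.

    A touch counts as accurate if the estimated point lands within
    target_radius_mm of the intended target center.
    """
    errors = [math.dist(a, b) for a, b in zip(intended, estimated)]
    accuracy = sum(e <= target_radius_mm for e in errors) / len(errors)
    return accuracy, sum(errors) / len(errors)

# Example: three touches against targets with a 10 mm radius.
intended = [(0.0, 0.0), (30.0, 0.0), (60.0, 0.0)]
estimated = [(3.0, 4.0), (29.0, 2.0), (75.0, 0.0)]
print(tracking_metrics(intended, estimated, target_radius_mm=10.0))
# -> (0.666..., ~7.4): two of three hits, about 7.4 mm mean error
```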
Only one system tracked both sides of the hand and forearm, although separately [32]. On-skin
sensing was used to track input on targets placed around the forearm or the wrist, on two rows
on opposite sides of the forearm, or as two single targets on both sides of the hand. Three studies
tracked one side of both the hand and the forearm together with on-skin sensing [14, 51, 55]. None
of the systems tracked input around the entire forearm and hand.
Movement of the user can add noise to the signals that are tracked to recognize input, and
therefore hamper the performance of tracking systems. Touch sensors suffer the least from noise,
and with those, the users are free to move and change their body posture. In contrast, on-skin
sensing technologies are prone to noise caused by movements of the user because the tracked
signals propagate through the body, and muscle activation interferes with this.
Optical sensing places the most restrictions on the user’s mobility; maintaining line of sight
to the input area is necessary. Thus, most studies of optical sensing used fixed hand postures,
preventing, for instance, flexion of the wrist, which could bring the input surface of the hand
too close to the sensor, potentially confusing it with the finger. For example, the wrist joint was
restricted to a neutral pose to track taps on the hand with cameras and IR sensors attached to
a wristband [20, 24, 42], and the palm was kept flat and still by affixing it onto a table surface
[49] or to a cardboard equipped with markers [8]. These approaches also help to detect when the
fingertip touches the skin. Furthermore, most optical systems are sensitive to lighting conditions
and need to minimize light pollution from the environment. For example, an LED flash [42] or
infrared light emitters can help in highlighting the skin of the finger to distinguish it from other
surfaces reflecting light further away [24, 49].
To summarize, optical sensing may achieve the highest resolution, while touch sensors may
allow the most freedom in the user’s posture and movement. Optical and on-skin sensing show the
potential for tracking many types of input, and on-skin and touch sensors can complement optical
sensing for more robust touch detection.

4 DESIGNING FOR THE SKIN


As seen in previous sections, many possibilities for input on the skin exist. For other types of
interfaces, we know much about how to design feedback, lay out commands, and so on; how to do
that for on-skin interaction is less clear. For instance, people mark their skin with make-up, tattoos,


sports paints, and notes [47]; the locations and types of such marks have strong effects
on how they are perceived; similarly, the location of touch on the body has personal and social
significance. Next, we discuss what this means for designing user interfaces for the skin.

4.1 Mapping Input to the Features of the Skin


Layouts of input controls on the skin can be designed by simply copying the grids from keyboards
and touchscreens, or by adapting the layouts to the features of the skin and to the shape of the
body. Visual interfaces inevitably present some spatial layout. When the layout is presented on an
external device, the users need to untangle how the displayed targets are organized on the skin (i.e.,
to solve the mapping). Dezfuli et al. [8] asked participants to design a mapping by placing remote
control keys on their palm and fingers. The participants preferred a layout that is consistent with
the keys of a conventional TV remote control. In this layout the directional keys are located at
the corners of the square that the palm forms, and the selection button at the middle of the palm.
Moreover, the participants preferred to hold the palm diagonally, allowing a natural alignment of
the directional keys.
In addition to user preference, layout designs also influence input performance. Wang et al. [49]
examined typing performance on the palm with QWERTY keys which were displayed on smart
glasses. They compared a common QWERTY layout on a touchpad and on the palm, and a user-
defined layout on the palm, for which the participants had designed the shapes and locations of
the keys. They found that the participants typed 15% faster on a normal palm-based QWERTY
layout, and 39% faster on a user-designed palm-based QWERTY layout than on a touchpad.
Common anatomical or personal features of the skin can act as landmarks and guide the user
in input. The studies mentioned various landmarks, such as the joints of arm and fingers [11,
42], variations between concave and convex areas on the skin [8], and visual landmarks such as
freckles, veins, and tattoos [9]. For example, the participants of Gannon et al. [9] used freckles and
veins as guides for performing touch gestures. The participants of Dezfuli et al. [8] and Wang et al.
[49] also used landmarks in placing the keys; they followed the shape of the palm.
Landmarks have been suggested to improve input performance. For example, Gustafson et al. [10]
used an imaginary grid layout with anchoring points on the thumb tip, the index finger tip, and
the point where the thumb and index join and showed that participants tapped most accurately
on those, while tapping accuracy significantly decreased the farther the target was located from
these landmarks. In another paper, Gustafson et al. [11] showed that even without visual feedback,
users benefited from seeing their bare hands and achieved twice the target acquisition speed
compared to a blindfolded condition. In addition, Lin et al. [26] showed significantly higher tapping
accuracies on locations at the elbow and wrist compared to the other three locations along the
forearm.

4.2 Feedback for Skin Input


Skin input has been suggested to be used with desktop or remote displays (24%), smart watches
(24%), smart glasses (14%), headphones (12%), projectors (5%), mobile phones (5%), and VR glasses
(2%). The VR and smart glasses provided feedback elsewhere than on the skin [4, 27, 45], although
they could also be used to augment the skin with visual feedback. However, none of the interfaces
in our sample did so. Projectors were used for presenting feedback on the skin [12, 14], and smart
watches were used for both presenting feedback on the watch face [36], and projecting targets
next to the watch on the skin [22].
Users were given some sort of feedback about skin input in 24 papers in the sample; in the
rest of the papers the users received no feedback, for instance when no interactive tasks were
involved. Most of the interfaces provided visual feedback (83%). Out of these, 19% projected the


interface and displayed feedback (e.g., indicated target selection) directly on the skin where the
input was performed, allowing direct touch interaction similar to common touchscreens. A further
35% provided visual feedback on watch faces. Visual feedback was most frequently (62%) displayed
outside the body surface on external devices. Audio feedback was provided in 16% of the interfaces.
Surprisingly, none of the interfaces provided haptic feedback, and only one prototyped haptic
output by implanting an actuator [16].
Visual feedback is important for effective input. For example, Harrison et al. [14] found that
eyes-free tapping on the hand and forearm is on average 10.5% less accurate than tapping on
visual targets. Lissermann et al. [27] studied tapping on the ear, where no visual guidance was
available. Tapping on the ear lobe had an average accuracy of 80% on four, 64% on five, and 58%
on six targets. Gustafson et al. [11] found 19% faster performance in retrieving targets on the palm
by sliding input when participants were able to look at the palm compared to when they were
blindfolded.
Yet, the studies suggest that skin input is possible even when a user's view of the hand is limited.
For example, the average typing speed on a QWERTY keyboard on the palm was 10.5 words per
minute [49], the key size needed for achieving over 90% touch accuracy on nine targets on the palm
side of the hand and fingers was 28 mm [8], and the average accuracy for tapping five sections on
the forearm was 84% [26].
Feeling the inputting finger on the skin was suggested to help the users in finding targets
without visual feedback. For example, Gustafson et al. [11] examined the importance of such pas-
sive tactile feedback using a fake palm, which prevented participants from feeling their
fingertip on the skin in a study condition. The results showed a 30% slower performance in find-
ing targets on the fake palm. In addition, Lin et al. [26] found slower but more accurate target
selection in using sliding input than tapping.

5 CHALLENGES FOR SKIN INPUT


The previous sections have characterized how we may give input on the skin, how that input may
be sensed, and how user interfaces for the skin can be designed. Next, we discuss what we on
that basis see as the main opportunities and the related research challenges for human–computer
interaction using the skin. Those challenges are in part shaped by the current prototypes and
studies, but are in part also about the coming years of research in on-skin input. They include how
to use the skin for expressive input, how to transfer on-skin interaction from the laboratory to real
use, and how to map user interfaces to the skin.

5.1 Toward Expressive Input


One expectation for on-skin input is that it can be more expressive, that is, communicate more
information with better accuracy and higher resolution. Currently, however, the evidence around
this expectation is limited.
One challenge is about developing adequate methods to characterize how expressive input can
be, not only on the hands but across all skin areas that might be sensed. Such methods are needed
to aid the development of skin input types, and to benchmark those against device-based in-
put. For instance, the skin provides a larger touch surface than hand-held touchscreens, and can
therefore accommodate more touch targets. Currently, however, we do not know how many tar-
gets users can effectively select across the entire hand and forearm (let alone other parts of the
body). Using combinations of on- and off-skin input, such as with WatchSense [46], the expressive-
ness of commands could further be increased. This challenge also concerns how to use grabbing,
pulling, scratching, and other deformations of the skin for expressive input. Whereas this forms
the topic of several papers, we do not know which deformations are useful, or how many distinct


commands can be communicated with deformations. Evaluating user performance with such in-
puts could help in characterizing the expressivity across different locations and input types on the
skin.
A second challenge is to increase the tracking resolution and accuracy in detecting input; cur-
rently, we do not know how large an area and how many targets or commands can be effectively
sensed. Tracking performance (e.g., the sensing accuracy across the number of targets) is one of
the basic measures of any input technology, and is necessary information for choosing which
technology to use for skin input. However, the studies rarely measured sensing performance sep-
arately from user performance. Following the approach of Lim et al. [24], the sensing accuracies
could be measured with robots. This would help identify suitable sensors early, before investing
unnecessary effort in the machine learning and classification methods that are only needed in final
applications to interpret real user input. Further, the resolutions and tracking accuracies of current
sensing technologies could be improved by combining sensors. Optical sensing has already been
successfully combined with other sensing technologies to improve recognition rates of tapping
input [8, 20, 37, 38, 42, 45, 49]. Combining sensors could also allow tracking multiple input types,
such as deformation in addition to tapping.

5.2 Real-Life Use of Skin-Based Interfaces


The skin is inherently a mobile and a personal interface. Using the skin poses new challenges for
sensing technologies, but also new opportunities: the skin is an always-on surface that is larger
than hand-held devices and leaves more physical resources free for input. One important set
of research questions concerns understanding how useful these benefits can be in real life.
First, we need to understand how accessible and effective skin input can be on the move. The
studies suggest that the skin can be more accessible and usable on the move than external input
surfaces because feeling touch on the skin can compensate for lack of visual feedback [8, 11, 26,
38, 49]. No study, however, has examined the effectiveness of skin input types with moving users.
Evaluating physical engagement of the hands (e.g., using [40]) could reveal whether the input
types that allow continuous contact and therefore more stable hands, such as deforming input,
can be more accurate in mobile conditions than tapping. Microgestures, such as touch gestures on
a fingertip [5], or pressing the side of the index finger with the thumb similarly to using a laptop
trackpoint, could also perform well on the move while not being visible to other people. Currently
it remains unclear if these benefits materialize in real-life use.
For on-skin input to be beneficial in real-life use, we need robust sensing for mobile conditions.
Most of the current prototypes of skin-based touch interfaces fix the user’s pose to maintain an ac-
ceptable level of tracking and projection accuracy [9, 11, 26, 32, 49]. But fixing the hand invalidates
most of the reasons to use the skin instead of existing devices. Thus, evaluating the technologies
with a free hand posture is necessary for finding suitable technical solutions to sense input on the
skin in real application areas; that has currently not been done.
The third challenge is to ensure tolerance to other aspects of real-world use, in particular to de-
velop sensing that works even if part or all of the input area on the skin is covered. While we have
focused on interaction on bare skin, several projects have developed electronic tattoos that cover
the skin [17, 28, 54]. These tattoos reap many benefits of skin input, such as passive haptic feed-
back. Such sensors, as well as interactive clothing, are wearable rather than on-skin interfaces,
but sensing skin input through clothing is an area worth exploring to address applicability of
on-skin interfaces to different use contexts. Moreover, a graceful transition from on-skin to
covered-skin interaction also seems important for real-world usage.


5.3 Designing Skin Input and Output


Input on the skin presents new opportunities because the body's shape, texture, and social role be-
come a material for designing input and output, quite unlike designing for input on a touchscreen.
Whereas much work has enumerated these opportunities, many research challenges remain in
mapping input to the skin, and to external output.
One such challenge is to understand how location on the skin affects input and output. Projec-
tion can be used for output and for displaying the interface directly on the skin [12]. The studies
show, however, that surface features and forms such as curvature and softness affect the effec-
tiveness of input. For example, discrete touch input seems to perform worse on curved locations
[14], and the possible magnitude of deformations varies across different locations on the skin [53].
Location is likely to influence the user’s perception of input and output as well as their social
acceptability. Currently, these remain open questions.
The second challenge for designing skin-based interfaces is to understand how to map interface
elements and layouts onto the skin. The studies reviewed show findings supporting both grid
layouts, similar to touchscreens, and layouts fitted to the features of the skin and the shape of the
body. For example, users often prefer familiar grids [8] but perform better on layouts designed for
the skin [49]. Therefore, it is important to examine how visual targets should be mapped onto the
skin and how to balance between grids and body-based shapes.
Unlike direct projection [12], output displayed on an external interface poses a third challenge
to mapping because that involves alignment between two surfaces: the displayed layout and the
layout of input on the skin. Methods to effectively map indirect input exist and are applied, for in-
stance, in the control-display ratios of mouse and trackpad interfaces. Such straightforward trans-
formations, however, are not applicable to the skin because of its distinct shape and the body’s
varying alignment with external displays. The studies show that, in addition to locating the tar-
gets based on the features of the skin, users also align the touch surfaces to match external visual
layouts [8]. Design of external output would therefore benefit from models describing the intuitive
and effective mappings between skin input and external displays.
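One possible starting point for such models, offered here purely as a hypothetical sketch rather than a validated mapping, is to anchor the displayed layout to two skin landmarks (say, the wrist crease and the base of the index finger) and to map normalized display coordinates into the frame those landmarks define.

```python
Point = tuple[float, float]

def display_to_skin(u: float, v: float, wrist: Point, index_base: Point,
                    width_mm: float = 60.0) -> Point:
    """Map normalized display coordinates (u, v in [0, 1]) onto the forearm.

    The axis from the wrist landmark to the index-finger base defines the layout's
    long edge; v offsets the point perpendicular to that axis. The landmark
    positions and the 60 mm layout width are placeholders, not measured values.
    """
    ax, ay = index_base[0] - wrist[0], index_base[1] - wrist[1]
    length = (ax * ax + ay * ay) ** 0.5
    px, py = -ay / length, ax / length  # unit vector perpendicular to the forearm axis
    x = wrist[0] + u * ax + (v - 0.5) * width_mm * px
    y = wrist[1] + u * ay + (v - 0.5) * width_mm * py
    return x, y

# Example: the center of the display (0.5, 0.5) lands midway between the two landmarks.
print(display_to_skin(0.5, 0.5, wrist=(0.0, 0.0), index_base=(200.0, 40.0)))  # -> (100.0, 20.0)
```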
The fourth challenge for design concerns how to use haptic feedback for on-skin input. Visual
and haptic cues on the skin appear to help the user in finding target locations [1, 11], but no study
has systematically examined the use of haptic landmarks. The feedback perceived from touching the
skin has been suggested to guide touch. Finding ways to benefit from haptic feedback is important,
as well as developing new kinds of haptic feedback that users can sense on bare skin, such as those
produced with Electrical Muscle Stimulation [30].
The fifth challenge for skin input is to understand what meanings touch carries to the user.
Some studies have examined which input types participants preferred for given commands or
emotions [38, 45, 53]. For example, Weigel et al. [53] found that continuous tapping was preferred
to communicate alertness, such as forcing or emergency. No study, however, has examined what
meanings skin input triggers in the user. This is important because understanding meanings
would help in designing intuitive input and in avoiding unintentional meanings of particular touch
input; for instance, to not make the user feel alert when tapping.

6 CONCLUSION
The skin is a wonderful opportunity for interaction designers and researchers. Touching the skin
is a natural and important means of communication. Touch has strong effects on well-being and
health, and on how people behave and feel about other people, products, and services [2, 43].
People naturally use their skin, for instance, to communicate support for a sports team with
face paint, to visualize important things with tattoos, and to remind themselves of a call by writing


a phone number on the hand [47]. The skin can offer an expressive, personal, touch sensitive,
always-on input surface for human–computer interaction. Leveraging skin-specific characteristics
in designing user interfaces is exciting but also challenging.
The purpose of this article was to synthesize what skin input is, which technologies can sense
input on the skin, and how to design interfaces for the skin. The reviewed studies show that input
on the skin already works; touch was used for controlling menus, giving commands in games,
drawing symbols, typing, controlling music players, and communicating emotions. The largest
challenges are technical, in ensuring robust high-resolution tracking of input also in mobile real-
life use. The largest opportunities are in making use of the expressive features of the skin, such as
its large surface, deformation, landmarks, and new types of feedback. We hope that by addressing
challenges for research in human–computer interaction on the skin, this article also helps in de-
veloping existing technologies further and in designing expressive and effective input types and
interfaces for the skin.

REFERENCES
[1] Joanna Bergstrom-Lehtovirta, Sebastian Boring, and Kasper Hornbæk. 2017. Placing and recalling virtual items on
the skin. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 1497–1507.
[2] S. Adam Brasel and James Gips. 2014. Tablets, touchscreens, and touchpads: How varying touch interfaces trigger
psychological ownership and endowment. Journal of Consumer Psychology 24, 2 (2014), 226–233.
[3] Jesse Burstyn, Paul Strohmeier, and Roel Vertegaal. 2015. DisplaySkin: Exploring pose-aware displays on a flexible
electrophoretic wristband. In Proceedings of the 9th International Conference on Tangible, Embedded, and Embodied
Interaction. ACM, 165–172.
[4] Liwei Chan, Yi-Ling Chen, Chi-Hao Hsieh, Rong-Hao Liang, and Bing-Yu Chen. 2015. CyclopsRing: Enabling whole-
hand and context-aware interactions through a fisheye ring. In Proceedings of the 28th Annual ACM Symposium on
User Interface Software and Technology. ACM, 549–556.
[5] Liwei Chan, Rong-Hao Liang, Ming-Chang Tsai, Kai-Yin Cheng, Chao-Huai Su, Mike Y. Chen, Wen-Huang Cheng,
and Bing-Yu Chen. 2013. FingerPad: Private and subtle interaction using fingertips. In Proceedings of the 26th Annual
ACM Symposium on User Interface Software and Technology. ACM, 255–260.
[6] David Coyle, James Moore, Per Ola Kristensson, Paul Fletcher, and Alan Blackwell. 2012. I did that! Measuring users’
experience of agency in their own actions. In Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems. ACM, 2025–2034.
[7] Artem Dementyev and Joseph A. Paradiso. 2014. WristFlex: Low-power gesture input with wrist-worn pressure sen-
sors. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology. ACM, 161–166.
[8] Niloofar Dezfuli, Mohammadreza Khalilbeigi, Jochen Huber, Murat Özkorkmaz, and Max Mühlhäuser. 2014. PalmRC:
Leveraging the palm surface as an imaginary eyes-free television remote control. Behaviour & Information Technology
33, 8 (2014), 829–843.
[9] Madeline Gannon, Tovi Grossman, and George Fitzmaurice. 2015. Tactum: A skin-centric approach to digital design
and fabrication. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM,
1779–1788.
[10] Sean Gustafson, Christian Holz, and Patrick Baudisch. 2011. Imaginary phone: Learning imaginary interfaces by
transferring spatial memory from a familiar device. In Proceedings of the 24th Annual ACM Symposium on User Inter-
face Software and Technology. ACM, 283–292.
[11] Sean G. Gustafson, Bernhard Rabe, and Patrick M. Baudisch. 2013. Understanding palm-based imaginary interfaces:
The role of visual and tactile cues when browsing. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems. ACM, 889–898.
[12] Chris Harrison, Hrvoje Benko, and Andrew D. Wilson. 2011. OmniTouch: Wearable multitouch interaction every-
where. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology. ACM, 441–450.
[13] Chris Harrison, Shilpa Ramamurthy, and Scott E. Hudson. 2012. On-body interaction: Armed and dangerous. In
Proceedings of the 6th International Conference on Tangible, Embedded and Embodied Interaction. ACM, 69–76.
[14] Chris Harrison, Desney Tan, and Dan Morris. 2010. Skinput: Appropriating the body as an input surface. In Proceed-
ings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 453–462.
[15] Hayati Havlucu, Mehmet Yarkın Ergin, Idil Bostan, Oğuz Turan Buruk, Tilbe Göksun, and Oğuzhan Özcan. 2017. It
made more sense: Comparison of user-elicited on-skin touch and freehand gesture sets. In International Conference
on Distributed, Ambient, and Pervasive Interactions. Springer, 159–171.


[16] Christian Holz, Tovi Grossman, George Fitzmaurice, and Anne Agur. 2012. Implanted user interfaces. In Proceedings
of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 503–512.
[17] Hsin-Liu Cindy Kao, Christian Holz, Asta Roseway, Andres Calvo, and Chris Schmandt. 2016. DuoSkin: Rapidly proto-
typing on-skin user interfaces using skin-friendly materials. In Proceedings of the 2016 ACM International Symposium
on Wearable Computers. ACM, 16–23.
[18] Takashi Kikuchi, Yuta Sugiura, Katsutoshi Masai, Maki Sugimoto, and Bruce H. Thomas. 2017. EarTouch: Turning
the ear into an input surface. In Proceedings of the 19th International Conference on Human-Computer Interaction with
Mobile Devices and Services. ACM, 27.
[19] Scott Klemmer. 2011. Technical perspective: Skintroducing the future. Communications of the ACM 54, 8 (2011).
[20] Jarrod Knibbe, Diego Martinez Plasencia, Christopher Bainbridge, Chee-Kin Chan, Jiawei Wu, Thomas Cable, Hassan
Munir, and David Coyle. 2014. Extending interaction for smart watches: Enabling bimanual around device control.
In CHI’14 Extended Abstracts on Human Factors in Computing Systems. ACM, 1891–1896.
[21] Jarrod Knibbe, Sue Ann Seah, and Mike Fraser. 2014. VideoHandles: Replicating gestures to search through action-
camera video. In Proceedings of the 2nd ACM Symposium on Spatial User Interaction. ACM, 50–53.
[22] Gierad Laput, Robert Xiao, Xiang 'Anthony' Chen, Scott E. Hudson, and Chris Harrison. 2014. Skin buttons: Cheap,
small, low-powered and clickable fixed-icon laser projectors. In Proceedings of the 27th Annual ACM Symposium on
User Interface Software and Technology. ACM, 389–394.
[23] Juyoung Lee, Hui Shyong Yeo, Murtaza Dhuliawala, Jedidiah Akano, Junichi Shimizu, Thad Starner, Aaron Quigley,
Woontack Woo, and Kai Kunze. 2017. Itchy nose: Discreet gesture interaction using EOG sensors in smart eyewear. In
Proceedings of the 29th ACM International Symposium on Wearable Computers (ISWC ’17). Association for Computing
Machinery.
[24] Soo-Chul Lim, Jungsoon Shin, Seung-Chan Kim, and Joonah Park. 2015. Expansion of smartwatch touch interface
from touchscreen to around device interface using infrared line image sensors. Sensors 15, 7 (2015), 16642–16653.
[25] Jhe-Wei Lin, Chiuan Wang, Yi Yao Huang, Kuan-Ting Chou, Hsuan-Yu Chen, Wei-Luan Tseng, and Mike Y. Chen.
2015. Backhand: Sensing hand gestures via back of the hand. In Proceedings of the 28th Annual ACM Symposium on
User Interface Software and Technology. ACM, 557–564.
[26] Shu-Yang Lin, Chao-Huai Su, Kai-Yin Cheng, Rong-Hao Liang, Tzu-Hao Kuo, and Bing-Yu Chen. 2011. Pub-point upon
body: Exploring eyes-free interaction and methods on an arm. In Proceedings of the 24th Annual ACM Symposium on
User Interface Software and Technology. ACM, 481–488.
[27] Roman Lissermann, Jochen Huber, Aristotelis Hadjakos, Suranga Nanayakkara, and Max Mühlhäuser. 2014. EarPut:
Augmenting ear-worn devices for ear-based interaction. In Proceedings of the 26th Australian Computer-Human In-
teraction Conference on Designing Futures: the Future of Design. ACM, 300–307.
[28] Joanne Lo, Doris Jung Lin Lee, Nathan Wong, David Bui, and Eric Paulos. 2016. Skintillates: Designing and creating
epidermal interactions. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. ACM, 853–864.
[29] Christian Loclair, Sean Gustafson, and Patrick Baudisch. 2010. PinchWatch: A wearable device for one-handed mi-
crointeractions. In Proceedings of MobileHCI, Vol. 10.
[30] Pedro Lopes, Doğa Yüksel, François Guimbretiere, and Patrick Baudisch. 2016. Muscle-plotter: An interactive system
based on electrical muscle stimulation that produces spatial output. In Proceedings of the 29th Annual Symposium on
User Interface Software and Technology. ACM, 207–217.
[31] Yasutoshi Makino, Yuta Sugiura, Masa Ogata, and Masahiko Inami. 2013. Tangential force sensing system on forearm.
In Proceedings of the 4th Augmented Human International Conference. ACM, 29–34.
[32] Adiyan Mujibiya, Xiang Cao, Desney S. Tan, Dan Morris, Shwetak N. Patel, and Jun Rekimoto. 2013. The sound of
touch: On-body touch and gesture sensing based on transdermal ultrasound propagation. In Proceedings of the 2013
ACM International Conference on Interactive Tabletops and Surfaces. ACM, 189–198.
[33] Kei Nakatsuma, Rhoma Takedomi, Takaaki Eguchi, Yasutaka Oshima, and Ippei Torigoe. 2015. Active bioacoustic
measurement for human-to-human skin contact area detection. In 2015 IEEE SENSORS. IEEE, 1–4.
[34] Masa Ogata and Michita Imai. 2015. SkinWatch: Skin gesture interaction for smart watch. In Proceedings of the 6th
Augmented Human International Conference. ACM, 21–24.
[35] Masa Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, and Michita Imai. 2013. SenSkin: Adapting skin as a
soft interface. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology. ACM,
539–544.
[36] Masa Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, and Michita Imai. 2014. Augmenting a wearable dis-
play with skin surface as an expanded input area. In International Conference of Design, User Experience, and Usability.
Springer, 606–614.
[37] Uran Oh and Leah Findlater. 2014. Design of and subjective response to on-body input for people with visual im-
pairments. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers and Accessibility. ACM,
115–122.


[38] Uran Oh and Leah Findlater. 2015. A performance comparison of on-hand versus on-phone nonvisual input by blind
and sighted users. ACM Transactions on Accessible Computing (TACCESS) 7, 4 (2015), 14.
[39] Ryosuke Ono, Shunsuke Yoshimoto, and Kosuke Sato. 2013. Palm+ Act: Operation by visually captured 3D force on
palm. In SIGGRAPH Asia 2013 Emerging Technologies. ACM, 14.
[40] Antti Oulasvirta and Joanna Bergstrom-Lehtovirta. 2011. Ease of juggling: Studying the effects of manual multitask-
ing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3103–3112.
[41] Manuel Prätorius, Aaron Scherzinger, and Klaus Hinrichs. 2015. SkInteract: An On-body interaction system based
on skin-texture recognition. In Human-Computer Interaction. Springer, 425–432.
[42] Manuel Prätorius, Dimitar Valkov, Ulrich Burgbacher, and Klaus Hinrichs. 2014. DigiTap: An eyes-free VR/AR sym-
bolic input device. In Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology. ACM, 9–18.
[43] Zdravko Radman. 2013. The Hand, an Organ of the Mind: What the Manual Tells the Mental. MIT Press, 108–119.
[44] Munehiko Sato, Ivan Poupyrev, and Chris Harrison. 2012. Touché: Enhancing touch interaction on humans, screens,
liquids, and everyday objects. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM,
483–492.
[45] Marcos Serrano, Barrett M. Ens, and Pourang P. Irani. 2014. Exploring the use of hand-to-face input for interacting
with head-worn displays. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems.
ACM, 3181–3190.
[46] Srinath Sridhar, Anders Markussen, Antti Oulasvirta, Christian Theobalt, and Sebastian Boring. 2017. WatchSense:
On-and above-skin input sensing through a wearable depth sensor. In Proceedings of the 2017 CHI Conference on
Human Factors in Computing Systems. ACM, 3891–3902.
[47] Paul Strohmeier, Juan Pablo Carrascal, and Kasper Hornbæk. 2016. What can doodles on the arm teach us about
on-body interaction? In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing
Systems. ACM, 2726–2735.
[48] Ying-Chao Tung, Chun-Yen Hsu, Han-Yu Wang, Silvia Chyou, Jhe-Wei Lin, Pei-Jung Wu, Andries Valstar, and Mike
Y. Chen. 2015. User-defined game input for smart glasses in public space. In Proceedings of the 33rd Annual ACM
Conference on Human Factors in Computing Systems. ACM, 3327–3336.
[49] Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, and Mike Y. Chen. 2015. Palm-
Type: Using palms as keyboards for smart glasses. In Proceedings of the 17th International Conference on Human-
Computer Interaction with Mobile Devices and Services. ACM, 153–160.
[50] Cheng-Yao Wang, Min-Chieh Hsiu, Po-Tsung Chiu, Chiao-Hui Chang, Liwei Chan, Bing-Yu Chen, and Mike Y. Chen.
2015. PalmGesture: Using palms as gesture interfaces for eyes-free input. In Proceedings of the 17th International
Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, 217–226.
[51] Yuntao Wang, Chun Yu, Lin Du, Jin Huang, and Yuanchun Shi. 2014. BodyRC: Exploring interaction modalities using
human body as lossy signal transmission medium. In Proceedings of the 2014 IEEE 11th International Conference on
Ubiquitous Intelligence and Computing, IEEE 11th International Conference on Autonomic and Trusted Computing, and
IEEE 14th International Conference on Scalable Computing and Communications and Its Associated Workshops (UTC-
ATC-ScalCom’14). IEEE, 260–267.
[52] Martin Weigel, Tong Lu, Gilles Bailly, Antti Oulasvirta, Carmel Majidi, and Jürgen Steimle. 2015. Iskin: Flexible,
stretchable and visually customizable on-body touch sensors for mobile computing. In Proceedings of the 33rd Annual
ACM Conference on Human Factors in Computing Systems. ACM, 2991–3000.
[53] Martin Weigel, Vikram Mehta, and Jürgen Steimle. 2014. More than touch: Understanding how people use skin as an
input surface for mobile computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
ACM, 179–188.
[54] Martin Weigel, Aditya Shekhar Nittala, Alex Olwal, and Jürgen Steimle. 2017. SkinMarks: Enabling interactions on
body landmarks using conformal skin electronics. In Proceedings of the 2017 CHI Conference on Human Factors in
Computing Systems. ACM, 3095–3105.
[55] Yang Zhang, Junhan Zhou, Gierad Laput, and Chris Harrison. 2016. SkinTrack: Using the body as an electrical waveg-
uide for continuous finger tracking on the skin. In Proceedings of the 2016 CHI Conference on Human Factors in Com-
puting Systems. ACM, 1491–1503.

Received May 2018; revised April 2019; accepted May 2019

