Dix Human Computer Interaction J Visual Language Computing 2016
DOI:
10.1016/j.jvlc.2016.04.001
License:
Creative Commons: Attribution-NonCommercial-NoDerivs (CC BY-NC-ND)
Document Version
Peer reviewed version
Alan Dix
www.elsevier.com/locate/jvlc
PII: S1045-926X(16)30008-8
DOI: http://dx.doi.org/10.1016/j.jvlc.2016.04.001
Reference: YJVLC748
To appear in: Journal of Visual Language and Computing
Received date: 19 January 2016
Accepted date: 18 April 2016
Cite this article as: Alan Dix, Human computer interaction, foundations and new paradigms, Journal of Visual Language and Computing, http://dx.doi.org/10.1016/j.jvlc.2016.04.001
Human Computer Interaction,
foundations and new paradigms
Alan Dix
University of Birmingham and Talis
Abstract
This paper explores the roots of human computer interaction as a discipline, the
various trends which have marked its development and some of the current and
future challenges for research. Human–computer interaction, like any vocational
discipline, sits upon three broad foundations: theoretical principles, professional
practice and a community of people. As an interdisciplinary field the theoretical
roots of HCI encompass a number of other disciplines including psychology and
computing, ergonomics, and social sciences; however, it also has theoretical and
practical challenges of its own. The evolving internal and external context of HCI is also examined: computers have become smaller and less costly, and this has led to changes in the nature of the users and uses of computers, with corresponding impact on society. The
paper explores the current challenges of computers from the cloud to digital
fabrication and the need to design for solitude. It suggests that HCI cannot just
react to the changes around it, but also shape those changes.
Keywords
human–computer interaction; history; ubiquitous computing; cloud computing;
design for solitude; digital fabrication
Introduction
From 'foundations' to 'new paradigms' is a wide canvas and this paper attempts
to paint a picture of human–computer interaction from its earliest roots to future
challenges. This breadth is also fitting, in that HCI as an academic discipline has always been positioned, sometimes uneasily, sometimes creatively, in the tension between solid intellectual rigour and the excitement of new technology. Stefano Levialdi, in whose honour this special issue appears, had a rich appreciation of both, and so I hope this paper is one that he would have enjoyed, as well as offering an overview of the field as it was, as it is and as it could be.
Foundations
Human–computer interaction, like any vocational discipline, sits upon three
broad foundations.
Principles – First, and most obviously are the intellectual theories, models and
empirical investigations that underlie the field. Given HCI's cross-disciplinary nature, some of these come from a number of related disciplines and some constitute core HCI knowledge.
Practice – Second, HCI is a field that, inter alia, seeks to offer practical guidance
to practitioners in interaction design, usability, UX, or whatever becomes the
next key term. However, it is also a discipline that has always sought to learn from the practical design and innovations that surround it.
People – Finally, there are the visionaries who inspire the field and perhaps most
importantly the HCI community itself: researchers, educators and practitioners.
I will not attempt to separate these three in the following sections as they are all
deeply intertwined in both the history and current state of HCI.
The interplay between the first two is central to the long-standing discussion of
the nature of HCI originally posed by Long and Dowell [LD89]: is it a science,
engineering or craft discipline? However, when I addressed the scientific
credentials of HCI in my own response to this work [Dx10] in the IwC Festschrift
for John Long, I found myself addressing as much the nature and dynamics of the
academic community as the literature itself.
I will not reprise the arguments here, but the importance of the community is a
message that is also central to Stefano's legacy. As well as being a deeply humane person at a one-to-one level, his contributions to the development of the Italian HCI community and the founding of the AVI conference series have been of
importance to many individuals as well as the academic growth of the field. It is
not that the archival written outputs are not critical, indeed Stefano's role in
JVLC is evidence of that, but that scientific outputs are always the result of a
human process.
Historic Roots
HCI developed as a discipline and a community in the early 1980s, triggered
largely by the PC revolution and the mass use of office computers. It was in the
early 1980s that the major HCI conferences began: Interact, CHI, British HCI and Vienna HCI; all but the last still active today. Core concepts were also formulated
in those days including the notion of direct manipulation and user centred
design [Sc83,ND86].
However, while the identifiable discipline began in the 1980s, the intellectual
roots can be traced back at least 25 years earlier.
The graphical user interface and desktop metaphor, embodied in the early Apple
Mac, were the result of work at Xerox PARC throughout the 1970s, mostly based
around graphical programming environments such as Smalltalk and InterLisp,
and leading to the design of the Xerox Star office computer [SK82, JR89], a
conceptual breakthrough albeit a commercial failure.
However, the very first true HCI paper dates back into the late 1950s, with
Shackel's 'Ergonomics for a Computer' [Sh59]. While Sutherland and Engelbart were early examples of the vision/innovation side of HCI, Shackel's first HCI paper came more from a practical design perspective: the redesign of the control panel of the EMIAC II, an early analogue computer.
Although the computer was analogue not digital, and the controls knobs and
patch-panels, not mice or keyboards, many of the principles of practical usability
engineering can be seen in this very earliest HCI paper including prototyping,
empirical testing, visual grouping, and simplifying design. Furthermore this very
practical work rooted itself in earlier theoretical work in ergonomics and applied
experimental psychology, in many ways prefiguring the discipline we know
today.
In the earliest days the main disciplines involved in HCI were computer science,
psychology and ergonomics, as reflected in Shackel's early paper. However,
these disciplines were soon joined by social science, or, to be more precise, the
ethnographic and anthropological side of sociology.
The input from ergonomics was initially in terms of physical ergonomics, sitting
at a computer terminal, pressing keys; however, this more physical side of HCI
declined rapidly as computers became commoditised as opposed to being in
special settings and issues of physical ergonomics were relegated to health and
safety concerns. To some extent this followed from the natural development of
the area, once computers were mass-produced, practitioners had little control of
the physical system unless they worked for major manufacturers. However,
users have suffered from this loss of ergonomic input: many laptops and other
devices sacrificed physical ergonomics for surface aesthetics, as a generation of
RSI sufferers will attest! Happily, in more recent years, issues of physical design
have resurfaced with interest in tangible computing and strong research
connections developing with product design.
A few years ago Clare Hooper and I looked at the relationship between HCI and
web science [HD12, HD13]. Although there are core differences in scope and
focus, there are strong overlaps between the two. We drew on the web science
'butterfly', which includes all the disciplines that web science draws on. This
was remarkably similar to those that connect with HCI differing mostly in the
'heat map' of those most active or relevant (see figs 1 and 2).
Borrowings from other fields have been very powerful to enable both theoretical
and practical interventions. For example, Fitts' Law [Fi54] has created its own
small sub-community, human–human conversation analysis has been used to
design human–computer dialogues [FL90], and Csikszentmihályi's concept of
Flow [Cz90] has proved influential in user experience design.
However, while HCI can draw on the methods and knowledge of related fields
directly, there are limits to this for two reasons:
different concerns – The questions we ask in HCI are typically more applied and
hence more complex in terms of interrelations than 'base' disciplines, notably
psychology. Consider early studies of on-screen reading comprehension, or more recent comparisons of reading comprehension when holding a screen versus with hands on the table [BJ11]: while in many ways these could be seen as standard perceptual and cognitive psychology, the reasons for studying them were practical and unlikely to have arisen from purely psychological interest.
However, while there is copious empirical work of this kind, it is harder to find
truly integrative HCI theory. There were early descriptive accounts, notably
Norman's seven stages of action [No86,No88], and more predictive modelling
approaches such as Card, Moran and Newell's 'Model Human Processor' [CM86]
and Barnard's 'Interacting Cognitive Subsystems' [Ba85], but, while the former is
still influential, there is no clear path of deepening theory of interaction.
"As it stands, the only tradition in HCI is that of having no tradition in terms of
research topics. … when a new technology comes along it seems that researchers
start from scratch leading to relatively isolated research themes" [LG14]
This creates new challenges for the discipline, but also opens up a longer history of human innovation and evolution of technology. That is, in understanding the foundations of HCI we can draw on millennia, not just the mere thirty to fifty years of digital development (rich and rapid though it has been).
Ogburn and Gilfillan [OG33] were some of the earliest modern historians of
technology, and in the 1930s were reflecting on recent decades, which would
have seemed as revolutionary as our own (fig. 3). They, and more recent
commentators such as Basalla [Ba88] and Arthur [Ar09], emphasise the
continuity of technological change in contrast to what are often described as
'heroic' theories of invention focusing on great individuals.
However, this does not mean that the social ramifications of these inventions are
not significant. Reflecting on medieval technological change, White [Wh66]
argues that the invention of the stirrup not only radically altered warfare, but its
effects rippled through to agriculture (because of the breeding of large horses, which then replaced oxen) and fundamentally changed the social order as the
feudal system developed to create units capable of 'servicing' the horsed knight.
Looking back further still, there are arguments that human cognition developed
in part due to a sort of co-evolution with technology; for example, Calvin [Ca91]
argues that the development of the stone axe as a throwing weapon developed both the manual dexterity that enabled future tool development and, in the end more importantly, the mental skills for fine sequencing that enabled language and logical thought.
While this sounds far from current HCI, we should consider Fitts' Law [Fi54],
which states that the time taken to move to a target (such as moving a mouse to
select an onscreen button), is proportional to the log of the ratio of distance and
target size. This has been one of the most enduring and celebrated uses of basic psychological theory in user interface theory and design. Notably, Fitts' original experiments were with a stylus, while HCI experiments are almost always with some sort of artificial cursor. That is, Fitts' Law is a law of the
artificially (or cybernetically) extended human body. The wonder is that we can
control such devices, but this is because the earliest Homo Sapiens were tool
users; we have always been cyborgs!
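As a concrete illustration, the Shannon formulation of Fitts' Law can be computed in a few lines of Python. The coefficients below are illustrative assumptions only; in practice they are fitted empirically for each pointing device and user population:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target of a given width at a
    given distance, using the Shannon formulation MT = a + b*log2(D/W + 1).
    The intercept a and slope b are illustrative values, not measured ones;
    real values are fitted empirically per device."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Doubling the distance, or halving the target size, adds a roughly
# constant increment to the predicted movement time.
near_large = fitts_movement_time(distance=100, width=50)   # easy target
far_small = fitts_movement_time(distance=800, width=10)    # hard target
```

This also makes the design implication visible: a large, close button (low index of difficulty) is predicted to be faster to hit than a small, distant one, whatever the device.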
How many?
It is often said that in 1953 IBM believed that there would never need to be more
than five computers in the world. While this turns out to be a misquote (see
footnote 8 in [Dx10]), it is still true that in the early days the room-sized
computers were envisaged as something that would only be needed by very
large organisations. In this light, the decision, even in the late 1970s, to use 32-bit IP addresses [Po81], nearly one for every human being on the planet at that time, appears prescient.
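The arithmetic behind this prescience is simple: a 32-bit address space holds about 4.3 billion addresses, close to the world population of the early 1980s, while a 128-bit space is astronomically larger. A quick sketch (the 1981 population figure is a rough illustrative value):

```python
# Address-space sizes follow directly from the number of address bits.
ipv4_addresses = 2 ** 32    # about 4.3 billion
ipv6_addresses = 2 ** 128   # about 3.4e38

# Rough world population around 1981, for illustration only.
world_population_1981 = 4.5e9

# IPv4 offered nearly one address for every person alive at the time.
addresses_per_person = ipv4_addresses / world_population_1981
```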
Of course, there are now far more than 4 billion people on the planet, and mobile phones alone (each with a computer more powerful than the 1953 IBM 701) outnumber people [Bo14]. After many years, IPv6 is now being deployed, with 128-bit addresses.
“It would be easy to say the modern car is a computer on wheels, but it’s more like 30 or more computers on wheels” [Mo10]
As well as being big numbers and creating new challenges for network routing,
this scale changes the nature of HCI. Although we have sketched the origins of
HCI onto the late 1950s, the discipline was formed with the rise of the desktop
PC, one computer per person, the age of IPv4. As we contemplate thousands of
computers per person, it is not clear that the old metaphors hold. This is partially the fulfilment of Weiser's vision of ubiquitous computing [We91], but partly goes way beyond it in terms of scale, both large and small.
Weirdly, just as computers have shrunk and proliferated, there is also a counter
move to recentralise. While the internet giants are not operating single
computers, a significant proportion of the world's computation, and certainly
network traffic, happens in a handful of corporate distributed server farms.
Who?
In the early days of computing, the 1960s and 1970s, before HCI emerged as a
discipline, computer users were of two very different kinds. The creators of
software (programming and design) were mid-level employees, and relatively
well educated, although even then split very much between those involved in the
design and creation of computers and operating software, and those involved in
business programming. In contrast the direct users of computer software were
often low level, low paid, and involved in relatively repetitive jobs such as data
entry. The dominant professional interest in this was concerned with physical
ergonomics, a Taylorist desire to ensure that workers were as productive as
possible.
In contrast, the desktop PC radically changed the nature of the end user, shifting
to professional, clerical, and middle management and, to some extent, more
creative and intellectual work. The development of HCI is usually seen as the
reaction to this more individual form of computation. Alternatively, more
cynically, this could be seen less a humanist agenda and more to do with the
changing costs between computer and user: a shift from cheap labour using
expensive computers to cheap computers used by expensive employees.
The new millennium, and not least the rise of the web, has meant that now the
end user is everyone.
Even in developed countries the best access tends to be focused on the more
affluent and able. This is emphasised by the changing demographics of many developed countries, in many of which the retired population is expected to
outnumber those in work. These aging populations will increase the need for
user interfaces and systems that continue to function even as human perceptual,
physical and mental function degrades. The ASSETS community has long served
those who by birth, accident, or age do not share the same abilities as the 'norm'; the work in this area has always been important, but will grow more so.
During my thousand mile walk around Wales in 2013, it was rare to find even
usable GSM mobile signal, let alone 3G, which was only accessible in major cities
[MD14], and during the walk a Welsh government report found that 50% of
schools said that poor Internet connectivity was hampering education [Es13].
However, it is not just rural areas which suffer; a report commissioned by the Royal Society of Edinburgh showed that internet bandwidth was strongly correlated with other measures of social deprivation [FA13] – digital technology
widens existing social divisions.
Some of the issues are about government policy and economics, but as a
community we cannot simply wait for social change. Interface and digital design
makes a difference, sadly often for the worse. In the first journal paper on HCI
issues for mobile systems [Dx95], I looked not at issues of screen size, but
intermittent connectivity. Twenty years later, walking the margins of Wales, it
was poor design for low connectivity, not the low connectivity itself, which was
often the main issue: major software failed in predictable and avoidable ways
[Dx13].
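One common pattern for designing around intermittent connectivity, rather than assuming the network is always there, is to retry with backoff and to queue data locally until a connection returns. A minimal sketch in Python; the `send` interface, parameters and queue are illustrative assumptions, not from the paper:

```python
import time

def send_with_retry(send, payload, queue, attempts=3, base_delay=0.1):
    """Connectivity-tolerant sketch: try to send, back off between
    attempts, and queue the payload locally rather than failing outright
    if the network stays down. `send` is any function (an assumed
    interface) that raises ConnectionError on network failure."""
    for attempt in range(attempts):
        try:
            send(payload)
            return True
        except ConnectionError:
            # Exponential backoff between retries.
            time.sleep(base_delay * (2 ** attempt))
    # Keep the data safe until connectivity returns.
    queue.append(payload)
    return False
```

The point is the design stance: failure of the network is treated as an expected state with a defined behaviour, not an exceptional crash, which is precisely what much of the software encountered on the walk did not do.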
Of course even if everyone can use software, those who can create it are few –
the gap between programmers and users is nearly as large as it ever was. Of
course, part of being a large-scale consumer society is that many of the things we
use are beyond our skills to make or even modify – when a plastic spoon breaks
you throw it away, when your car breaks down, you call for a mechanic.
While computers were something one used occasionally, this argument perhaps
seemed valid; however, when everything is controlled by computers and is
interlinked, the ability to be able to understand and modify, at least to some
extent, becomes more important. That is, general computer literacy and end-
user programming move from being marginal interests to centre stage. Stefano
had a long interest in visual languages (hence this journal). These may be used
for sophisticated purposes, but often lie behind some of the most widely used
educational and end-user programming systems (e.g. Scratch, Max/MSP).
What for?
Along with the change in users, those who have been in HCI since the early days
have seen a dramatic change in the purpose of the systems being developed.
In the early days, from the first computer systems through to the focus on
desktop PCs in the 1980s, and CSCW in the 1990s, the focus was on computers
for work. This was sometimes realised in more Taylorist forms of task analysis
[DS04], sometimes in more interpretative ethnographic studies [Su87],
sometimes in more democratic participatory approaches [Gr03, MK93], but the
aim was principally to help make work more productive, and possibly more enjoyable too (especially if that made it more productive). While 'satisfaction' was always part of the early definitions of usability, it was almost always efficiency and effectiveness which took centre stage.
Although work-centred systems are still important, a key change in HCI was
when computation entered leisure and home-centred systems. The market for
social networks, satellite navigation, smart phones and smart TVs is no longer
the corporate buying for its workers, but consumers buying for themselves. This
shift from employer-determined to self-determined choices of systems drove in no
small part the shift from efficiency and user interface design, to emotion and
user experience design.
However, we are in the midst of another shift, perhaps equally profound. The
ubiquity and (near) universality of internet access means that many common services are becoming accessible largely or solely online. Many goods are cheaper if
purchased online, airlines often expect that boarding passes are downloaded and
printed before arriving at the airport, music and movies are streamed. In the
face of budget cuts the BBC is moving several broadcast channels to be digital
only, and many expect that printed news media will eventually disappear.
Furthermore, in many countries government and health services are increasingly
online.
That is, the very structure of life is increasingly computational and networked,
and this is not optional. For example, in the UK welfare payments are being
moved to a new system of 'universal benefits'; this change is being accompanied
by a shift to wholly online access – for those who, by definition, are likely to be poor and less well educated. We are moving from the era of self-determined computation to one that is societally determined.
The social problems with this are clear from the preceding section. As a
discipline HCI may likewise need to shift as we move from a decade that was based on free choice, and therefore focused on users as consumers, to one where there is little choice, and users are citizens.
For computers the critical shift was not so much utopian as economic. The
earliest end users spent their time feeding the computer, largely because the
computer was expensive and they were cheap. However, as we discussed
earlier, as computers became cheaper and higher in volume, there came a point when it was worthwhile making them serve people, and HCI was born.
Within HCI, the issue of function allocation, which jobs belong with the computer
and which with the human, is constantly evolving as technology redefines the
boundaries of what is better done manually or automatically. In an aircraft
cockpit this boundary may shift dynamically depending on the pilot's workload;
visualisation techniques seek to exploit the power of computation to present
data in ways which exploit the visual pattern seeking abilities of humans; even
the humble word processor reflows text as the human writer composes the
words.
However, the lessons of history show that the utopian image of technological
development is rarely simple. The Luddites of the 19th century are now seen as
the epitome of backwardness, fighting the (inevitable) change to more efficient
and productive textile mills. However, examinations of the writings of the time
showed that for the mill owners automation was more about control than
efficiency, shifting a previously independent and self-employed industry into a
centralised one based on employment and coercive working hours [Th63].
It is very unclear where recent developments such as Uber fit into this picture:
enabling individuals to connect and increasing autonomy, or making them cogs
in a machine.
These issues are playing out within HCI and related areas, so we have the
potential for real impact. The area of human-computation is often about fun
games such as image matching, or minor task-related activities such as
reCaptcha codes [AM08]. Typically the humans engaged in these tasks have little
or no idea of how their small intellectual labours contribute to the overall goal of
the system (e.g. improving OCR) – a clever balance utilising the power of the
human intellect, or treating people as components? Large-scale systems such as
the way Google uses statistics on page popularity, or Amazon recommendations
are not commonly described as human computation, but effectively are just that,
and Web Science is sometime described as the study of 'social machines' [HB10].
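The page-popularity example can be made concrete with a toy power-iteration ranking, in which each link acts as a small human judgement that the computation aggregates. The link graph and parameters below are invented for illustration; this is a sketch of the idea, not Google's actual system:

```python
def rank_pages(links, damping=0.85, iterations=50):
    """Toy power-iteration ranking: each page's score is spread evenly
    over its outgoing links on every iteration. `links` maps a page to
    the list of pages it links to. Illustrative only."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Each link is a small human judgement; aggregated, they rank the pages.
toy_graph = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
scores = rank_pages(toy_graph)
```

Seen this way, the 'social machine' framing is quite literal: the algorithm is a fixed computation, but its input is millions of small acts of human intellectual labour.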
In 1843 Ada Lovelace wrote of the Analytical Engine, "(it) has no pretensions whatever to originate any thing. … Its province is to assist us in making available what we are already acquainted with." [Lo43]. That is, she saw it very much in the same light as Engelbart did: augmenting human intellect. In contrast, there is the view of the machine as replacing, rather than augmenting, human capability.
The latter may seem somewhat theoretical, but in HCI we constantly face this
tension between technological determinism and human capabilities. A good
example of this was in the recent UK REF exercises, evaluating all UK university
research [RE14]. The computing sub-panel used an automatic algorithm to
normalise the different grading patterns of reviewers (some more generous than
others, some more central markers, some marking to extremes). This sounds
reasonable, except that in order to get the algorithm to work 'optimally' there needed to be sufficient overlap between reviewers' paper allocations, and in order to
achieve that overlap reviewers 'spread' their expertise, reviewing works far from
their core areas [Di15]. That is, in order to 'optimise' the machine algorithm, the
role of human expertise was diminished and the whole human–computer system
compromised.
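The kind of normalisation described can be illustrated with a simple per-reviewer z-score rescaling. This is an assumed, generic scheme for illustration only; the panel's actual algorithm was more elaborate:

```python
from statistics import mean, stdev

def normalise_scores(scores_by_reviewer):
    """Rescale each reviewer's raw scores to zero mean and unit spread,
    so that a 'generous' and a 'harsh' reviewer become comparable.
    A generic z-score scheme, not the REF panel's actual algorithm.
    Note it can only calibrate reviewers against one another when
    their paper allocations overlap, which is exactly the pressure
    discussed in the text."""
    normalised = {}
    for reviewer, scores in scores_by_reviewer.items():
        m, s = mean(scores), stdev(scores)
        normalised[reviewer] = [(x - m) / s for x in scores]
    return normalised

# A generous reviewer (high raw marks) and a harsh one (low raw marks)
# end up on the same scale after normalisation.
raw = {"generous": [3.0, 3.5, 4.0], "harsh": [1.0, 1.5, 2.0]}
adjusted = normalise_scores(raw)
```

Even this toy version shows the socio-technical trade-off: statistical comparability is bought at the price of demanding overlapping allocations, and hence of stretching reviewers beyond their expertise.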
To some extent this is an obvious socio-technical error, and yet it happened in the context of some of the most eminent computer science academics in the UK. The human–computer processes we find around us today
are often far more complex. As a discipline and a community in HCI, we need to
develop the tools and techniques to understand and design such systems, and
equally important be able to communicate this to others.
Some of the early work in HCI involved forms of mathematical modelling, not
least the Model Human Processor [CM86], often drawing on cognitive science
roots influenced by AI. These more reductionist models were challenged in the
early years by Winograd and Flores' "Understanding Computers and Cognition"
[WF86] and Suchman's "Plans and Situated Actions" [Su87], and led to a
widespread distrust of more formal methods in HCI ever since.
Despite this there has been a small but active community in formal methods for
HCI, initially focused strongly around the York group in the late 1980s and early
1990s, and continuing since in a number of specialist conferences, which
eventually merged to become ACM EICS. There have been a number of collected
volumes over the years [TH90, PP97] and a 'state of the art' Springer volume is
imminent [WP17].
Another strand of HCI work, which has had a similar arc to formal methods, is
the engineering of user interfaces including tools, toolkits and architectures.
This was important in the early days of HCI notably the development of the
Seeheim model [PH85], MVC [KP88] and PAC [Co87]. This has continued to have
a core community represented in IFIP WG2.7 and ACM EICS, but also periodic
more widespread work as new kinds of technologies emerge and architectures are needed, for example, work in the early 2000s on event architectures for ubicomp (e.g. Elvin [LR00] and ECT [GI04]).
Visualisation
Visualisation was another core area for Stefano, and one of ongoing importance
as data continues to multiply. Indeed there is now more data created in the
world every second than there was in a whole year in the early 1990s [Wa15].
Although there had been work in scientific visualisation and graphics before HCI
existed, it was in the early 1990s that the speed of graphics terminals made
interactive visualisation possible and spurred a period of innovation not seen
since (for example, Cone Trees [RM91], TreeMaps [Sh92], Pixel Plotting [KH02],
Starfield [AS94] and Shneiderman's visualisation mantra [SP10]). However, the
sheer volume of data has led to new challenges over recent years, in particular
the rise of visual analytics combining visualisation and various forms of
automated analysis such as data mining [TC05, KK11]. It is likely that the Big
Data agenda will continue to push research in this area for some time to come
giving rise to interesting and important user interface challenges [DP11].
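Of these techniques, the treemap [Sh92] is perhaps the easiest to convey in code: a rectangle is divided among the data items in proportion to their size, recursing into sub-rectangles for hierarchical data. A minimal one-level 'slice' sketch in Python, with data values invented for illustration:

```python
def slice_layout(values, x, y, w, h, vertical=True):
    """One level of the classic slice-and-dice treemap layout: divide the
    rectangle (x, y, w, h) among `values` in proportion to their size.
    The full recursive version alternates split direction at each level
    of the hierarchy. Returns one (x, y, w, h) rectangle per value."""
    total = sum(values)
    rects, offset = [], 0.0
    for v in values:
        frac = v / total
        if vertical:   # slice the rectangle left-to-right
            rects.append((x + offset * w, y, frac * w, h))
        else:          # slice it top-to-bottom
            rects.append((x, y + offset * h, w, frac * h))
        offset += frac
    return rects

# Four items sized 40/30/20/10 share a unit square in proportion.
layout = slice_layout([40, 30, 20, 10], 0, 0, 1.0, 1.0)
```

The appeal for large data is that area is used exhaustively: every pixel of the display encodes a share of the whole, which is precisely why treemaps scale to far more items than a node-link diagram.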
While some data is proprietary there has been a huge growth of Open Data
especially government data, offering the potential for third parties to interrogate
data and potentially use it to challenge policies and engage in democratic debate.
This has enabled a new media area of data journalism [GC12], for example, the
Guardian datablog [Gu16]. However, as with end-user programming, the ability
to harness this data is far from universal. Those that are most easily able to
afford the skills and processing power to benefit from open data are often those
who are already most powerful. There is a real challenge for HCI to make large-
scale data visualisation and analysis usable by small-scale communities and
interest groups [Di14].
Halevy et al., based on experience in many areas at Google, have written about the "unreasonable effectiveness" of data [HN09]. In particular, areas that were once the purview of symbolic AI techniques, such as natural language processing, are now being tackled by large-scale statistical and machine learning algorithms.
Many years ago, in the early days of the use of non-symbolic AI such as neural
networks, I wrote about some of the challenges these raise for the transparency
and accountability of computer systems [Dx92], including examples of the potential for implicit sexual and racial discrimination. Although it has been a
long time coming, these very issues have come to the fore with complaints that
Google image search produces gender-biased results or the Microsoft chat-bot
that learnt (from humans on Twitter) to use racist language [Hu16].
One of the core developments in early HCI was the rise of the graphical user
interface (GUI) and direct manipulation [Sc83]. The notion of 'directness' is
critical here (and explored in depth in several of the chapters in User Centred
System Design [ND86]). In command line interfaces, the interaction was
mediated: digital resources (files, words, numbers, shapes) were effectively seen
as under the control of the computer and users asked the computer to perform
actions on them. In GUIs, users directly acted on the objects themselves, what Draper described as a display (effectively 'digital') medium [Dr86].
Much of the earliest computation was about automation, indeed the Commodore
PET, the first true personal computer, was still to be seen in factories and
controlling equipment long after it had been retired from the desktop. However,
as noted when discussing the role of ergonomics, the shrinking and
commodifying of the personal computer meant that for many years HCI focused
largely on digital interactions (even if emulating physical ones).
This arc in HCI from physical to virtual interaction has changed in more recent years in two ways.
These two trends to some extent meet (see table 1) in the emerging technology of autonomous vehicles, both domestic (the controversial Google car) and military
(even more controversial autonomous weaponry) [HM15]. Slightly less
controversially, they also come together in the areas of human–robot interaction
and social robotics [BM10].
Table 1:
              passive              autonomous
  physical    early cybernetics,   Google cars,
              personal devices     autonomous weaponry
  digital     appliances           social robotics
New Paradigms
We have seen some of the trends and threads that characterise changes in the
discipline and this has naturally surfaced a number of key challenges and areas
that are either currently significant or need to be so. Not least is the way in
which computation has ceased to be an optional part of particular aspects of life
for certain people and is becoming an unavoidable element underlying all aspects
of life for everybody. HCI is becoming as important and all-pervading, and
perhaps as difficult, as nutrition.
However, in both health and education we are starting to see systems growing
together allowing big data techniques to be applied to determine trends and then
feed these back to give individual advice. For example, learning analytics have
been used in higher education to predict the likelihood that students will fail and
then offer appropriate advice [AP12].
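As a toy illustration of how such an early-warning analysis might work (the feature names, weights, and threshold below are invented for illustration, not taken from the Course Signals system described in [AP12]):

```python
# Hypothetical sketch of a learning-analytics early-warning score.
# Features, weights, and the 0.5 threshold are illustrative assumptions.

def risk_score(logins_per_week, assignments_submitted, assignments_due):
    """Crude risk score in [0, 1]: low engagement means high risk."""
    engagement = min(logins_per_week / 5.0, 1.0)       # cap at 5 logins/week
    completion = assignments_submitted / max(assignments_due, 1)
    return round(1.0 - 0.5 * (engagement + completion), 2)

def flag_students(records, threshold=0.5):
    """records: {student_id: (logins, submitted, due)} -> ids at risk."""
    return [sid for sid, (logins, done, due) in records.items()
            if risk_score(logins, done, due) >= threshold]
```

In a real system the score would of course be learnt from historical data rather than hand-weighted, and the flagged list would feed the advice-giving step rather than stand alone.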
This raises specific interface issues, for example, I have considered how best to
notify academics and enable them to act on such data [DL15], but it also requires
much more 'big picture' whole-systems thinking as all your education data, or all
your medical data is being gathered, often imperceptibly, and integrated by
different systems, some governmental, some commercial. For example, when I
walked around Wales I ended up with 60 days' worth of ECG, EDA and other
health related data, currently available as open data [DE15]. This is unusual in
terms of its pervasive nature, but was limited to a short period. Fitness devices
and apps mean that many are beginning to share intimate and personal data
without clear understanding of the implications.
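A small sketch of the kind of processing such a trace invites; the data layout here is hypothetical, since real quantified-self exports vary widely:

```python
# Illustrative only: summarise a long wearable-sensor trace (e.g. heart
# rate samples) into daily means. The (date, value) pair format is an
# assumption for the sketch, not the format of any particular dataset.
from collections import defaultdict

def daily_means(samples):
    """samples: iterable of (iso_date, value) pairs -> {date: mean value}."""
    totals = defaultdict(lambda: [0.0, 0])
    for date, value in samples:
        totals[date][0] += value        # running sum per day
        totals[date][1] += 1            # sample count per day
    return {d: round(s / n, 1) for d, (s, n) in totals.items()}
```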
Whereas in health and education, the user issues stretch out from the direct
interface into continual and intimate monitoring of life, many of these socially
pervasive applications are focused on very specific kinds of activity: paying taxes,
hailing a taxi, booking accommodation. Here the stretching is about the way a single, focused interaction ripples out into wider social and economic systems.
This is not to say that the direct interface is not significant: Uber's simplicity has
been a key aspect of its growth. It is more that this individual interaction spills
into social and political changes that are larger than each individual transaction.
Much of the controversy around Uber has been the apparent deliberate attempts
to accelerate this process in its area, using massive investment to undercut
alternatives, but then exploit this position [Ro15,Ta15], as was seen in the Sydney
hostage crisis [Ba14].
Truly invisible
When Weiser introduced ubiquitous computing, he said, "The most profound
technologies are those that disappear " [We91]; indeed, when the European
Commission had a research strand on ubiquitous computing and when Norman
wrote about the issue, they both used the phrase "invisible computer" [No98].
However, Weiser's article is all about displays, small ones (inch scale), medium
ones (foot scale) and large ones (yard scale). The 'disappearing' was not about
the technology becoming physically invisible, but becoming unnoticed, like a
carpet or wallpaper, there but simply part of the background.
However, we are now finding many interfaces literally invisible. Voice interfaces
such as Siri or Cortana allow interaction without seeing a screen, and there is
substantial work on 'natural user interfaces', often using Kinect or other non-
contact sensors to enable device-less interactions via gestures. It is even
possible to use ultrasound to create the feel of objects in mid-air, contactless
tactile interactions [CS13].
Even more problematic, as the size of devices reduces and the number of devices
proliferates, it is not so much that we are interacting with a single invisible
computer, but a more amorphous computational substance permeating the
environment. To date, the situation is less extreme than this, but the time is not
far off. We urgently need new ways to conceptualise and design for these vast
device ensembles, to understand and control emergent behaviours and make
sense of the unseen.
Locus of control
One of the problems with invisible computation is potential loss of control. Many
of the key user interface design principles are about ensuring that the user is in
control: visibility of system state, knowing what it is possible to do, having
effective and timely feedback of actions (see figs. 4 and 5). This importance of
control was also evident in the early hypertext community's concern that users
may get "lost in hyperspace" [Co87].
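These principles are concrete enough to sketch in code; the following toy widget (all names hypothetical) makes its state visible, enumerates what it is currently possible to do, and echoes feedback for every attempted action:

```python
# Hypothetical sketch of the design principles above: visibility of
# system state, knowing what is possible, and timely feedback.

class VisibleToggle:
    def __init__(self):
        self.on = False
        self.feedback = []                 # timely feedback of each action

    def available_actions(self):
        """Knowing what it is possible to do."""
        return ["turn on"] if not self.on else ["turn off"]

    def act(self, action):
        if action not in self.available_actions():
            self.feedback.append(f"'{action}' not possible now")
            return
        self.on = not self.on
        self.feedback.append(f"light is now {'on' if self.on else 'off'}")

    def state(self):
        """Visibility of system state."""
        return "on" if self.on else "off"
```

Trivial as it is, each principle corresponds to a method; an interface that hides any one of them leaves the user guessing.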
However, it is not clear whether this concern about disorientation is still
universally valid. The term "lost in hyperspace" is rarely heard nowadays, not because users have a greater
sense of where they are in complex web-based interfaces, but, apparently,
because they do not care, at least for web-based information – if you want to find
the information again, there is always Google. In contrast, for desktop PIM only a
small percentage of users rely on desktop search, the majority preferring to
navigate file hierarchies, despite many users' difficulties managing them.
Whereas the shift to direct manipulation in the 1980s was all about users
controlling the interface, it is almost as if the user is being manipulated or
coerced, acting at the whim of the machine. The ramifications of this potentially
spread beyond the interface itself – if our systems constantly train people what
to do and when to do it, is this ultimately good for an informed citizenry and
democracy?
Looking back at the design of the interface itself, there is clearly a mismatch
between our user interface design principles and the reality in many systems
today. This could be because the systems are badly designed, or it could be
because the principles are outdated, prepared in the days of productivity
software rather than social media. Probably the truth lies somewhere in between.
Digital fabrication
Even five years ago, laser cutters and 3D printers were high-end industrial
machines. By Christmas of 2015, low-end 3D printers were in newspaper
magazines' 'what to buy your spouse' lists.
For the professional designer this offers the potential for rapid prototyping of
the physical form alongside the interactions of hybrid digital–physical goods;
experiments have shown subtle effects on interaction depending on the level of
physical fidelity of prototypes [GL08]. HCI researchers have also begun to
explore novel interactions involving digital fabrication including forms of direct
manipulation during the creation of objects [MK13, WA15].
Mass customisation near the point of use would open complex business, legal,
and health and safety issues; for example, who is responsible if a customised
microwave catches fire? From an interaction design perspective, we would need
ways to ensure that highly customised control panels are still usable, whether
through automatic tools to assess end-user designs, or maybe the HCI equivalent
of popular fashion or house redecoration television programmes.
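One could imagine such an automatic tool as a simple 'usability linter' run over an end-user's panel layout. The rules and the 9 mm figure below are illustrative assumptions, not a published standard:

```python
# Hypothetical usability linter for a customised control panel.
# Two illustrative rules: minimum touch-target size, every control labelled.

MIN_TARGET_MM = 9  # rough finger-sized touch target (illustrative figure)

def lint_panel(controls):
    """controls: list of dicts with 'name', 'label', 'width_mm', 'height_mm'."""
    problems = []
    for c in controls:
        if min(c["width_mm"], c["height_mm"]) < MIN_TARGET_MM:
            problems.append(f"{c['name']}: target smaller than {MIN_TARGET_MM} mm")
        if not c.get("label"):
            problems.append(f"{c['name']}: unlabelled control")
    return problems
```

A production tool would need far richer rules (contrast, grouping, consistency with the appliance's safety-critical functions), but even checks this crude could catch the worst end-user designs before fabrication.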
Phoebe Sengers found that spending time on an island community enabled her to
re-evaluate the nature and, critically, the pace of IT [Se11], and one of the aims of
the Tiree Tech Wave workshop series I organise is to help researchers and
makers reflect on their work in a physically and intellectually open environment
[DD11]. However, it is not always possible or desirable to travel to an island in
order to escape constant digital intrusions.
Basic HCI
Finally, after looking at the emerging trends and paradigms, it is wise to look
back to our beginnings.
Apple products are often seen as being a touchstone of good usability design.
However, if you turn on an iPhone the unlock slider appears up to a minute
before it is possible to actually swipe it. Similarly, when you open a MacOS
laptop, the password entry box appears long before you can type. In iTunes
there are scrolling panes within a scrolling window, where the inner scrolling
panes are larger than the outer window so that you need to scroll the outer
window to navigate the scrollbar of the inner window. Recently I had several
hundred files selected in the downloads folder ready to move them to an archive,
but accidentally double clicked, causing them all to open simultaneously and lock
up the machine.
For Apple this is not a recent problem and for some years the focus on surface
aesthetics has overridden core usability. Even Don Norman and Bruce
Tognazzini have written bemoaning the demise of Apple usability [NT15].
Clearly there are examples of good usability practice, for example, the team
developing the touch keyboard for Windows 8 documented a rich process of
experiments and user observations [Si12]. However, it seems that as a discipline
we do need to constantly reiterate the lessons of the past as well as look towards
the new things of the future.
References
[AS94] Ahlberg, C., Shneiderman, B.: Visual information seeking: tight coupling of
dynamic query filters with starfield displays. Proceedings of the SIGCHI
conference on Human factors in computing systems: celebrating
interdependence. pp. 313–317. ACM, New York, NY, USA (1994).
[AM08] von Ahn, L., Maurer, B., McMillen, C., Abraham, D. and Blum, M.
(2008). reCAPTCHA: Human-Based Character Recognition via Web Security
Measures. Science, 321(5895):1465–1468
[AP12] Arnold, K. and Pistilli, M. 2012. Course signals at Purdue: using learning
analytics to increase student success. In Proc. of the 2nd International
Conference on Learning Analytics and Knowledge (LAK '12). ACM, NY, USA, 267-
270. doi:10.1145/2330601.2330666
[Ba14] Naina Bajekal (2014). Uber Charged 4 Times Its Usual Rate During
Sydney Hostage Siege. Time, Dec. 15, 2014
[BS04] Richard Boardman and M. Angela Sasse. 2004. "Stuff goes into the
computer and doesn't come out": a cross-tool study of personal information
management. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (CHI '04). ACM, New York, NY, USA, 583-590. DOI:
10.1145/985692.985766
[Bo14] Zachary Davies Boren (2014). There are officially more mobile devices
than people in the world. The Independent, Tuesday 7 October 2014.
http://www.independent.co.uk/life-style/gadgets-and-tech/news/there-are-
officially-more-mobile-devices-than-people-in-the-world-9780518.html
[Br14] Kristen V. Brown (2014). S.F. startup Telerivet's SMS could help
developing nations. SFGate, San Francisco Chronicle, July 15, 2014.
http://www.sfgate.com/technology/article/S-F-startup-Telerivet-s-SMS-could-
help-5622382.php
[Ca91] William Calvin (1991). The Ascent of Mind: Ice Age Climates and the
Evolution of Intelligence. Bantam Books.
[CS13] Tom Carter, Sue Ann Seah, Benjamin Long, Bruce Drinkwater, Sriram
Subramanian, Ultrahaptics: Multi-Point Mid-Air Haptic Feedback for Touch
Surfaces. UIST'13. October 2013.
[CF09] Chandler, A., Finney, J., Lewis, C. and Dix, A (2009). Toward emergent
technology for blended public displays. Ubicomp '09: Proceedings of the 11th
international conference on Ubiquitous computing. New York, NY, USA: ACM, pp.
101–104.
[DF06] Dearden, Andy and Finlay, J. (2006). Pattern languages in HCI: a critical
review. Human computer interaction, 21 (1), 49-102.
[DS04] Diaper, D., & Stanton, N. (Eds.) (2004). The Handbook of Task Analysis
for Human-Computer Interaction. Lawrence Erlbaum Associates.
[Dx92] A. Dix (1992). Human issues in the use of pattern recognition techniques.
In Neural Networks and Pattern Recognition in Human Computer Interaction
Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451.
http://alandix.com/academic/papers/neuro92/neuro92.html
[DS10] A. Dix and C. Sas (2010) Mobile Personal Devices meet Situated Public
Displays: Synergies and Opportunities. International Journal of Ubiquitous
Computing (IJUC), 1(1), pp. 11-28. http://www.hcibook.com/alan/papers/MPD-
SPD-2010/
[Dx11] Alan Dix. 2011. A shifting boundary: the dynamics of internal cognition
and the web as external representation. In Proceedings of the 3rd International
Web Science Conference (WebSci '11). ACM, New York, NY, USA, Article 9, 8
pages. DOI: 10.1145/2527031.2527056
http://alandix.com/academic/papers/websci2011-int-ext-cog/
[DP11] Alan Dix, Margit Pohl and Geoffrey Ellis (2011). Chapter 7: Perception
and Cognitive Aspects. In Mastering the Information Age Solving Problems with
Visual Analytics, Daniel Keim, Jörn Kohlhammer, Geoffrey Ellis and Florian
Mansmann (eds). Eurographics Association, pp. 109–130.
http://www.vismaster.eu/book/
[Dx13] A. Dix (2013). The Walk: exploring the technical and social margins.
Keynote APCHI 2013 / India HCI 2013, Bangalore India, 27th September 2013.
http://alandix.com/academic/talks/APCHI-2013/
[Di14] Dix, A. (2014). Open Data Islands and Communities. available at:
http://tireetechwave.org/projects/open-data-islands-and-communities/
[DL15] A. Dix and J. Leavesley (2015). Learning Analytics for the Academic: An
Action Perspective. In Journal of Universal Computer Science (JUCS), 21(1):48-65.
[DE15] Dix, A., & Ellis, G. (2015). The Alan walks Wales dataset: Quantified Self
and Open Data. In J. Atenas & L. Havemann (Eds.), Open Data As Open
Educational Resources: Case Studies of Emerging Practice. London: Open
Knowledge, Open Education Working Group. doi: 10.6084/m9.figshare.1590031
[DD11] Jakub Dostal and Alan Dix (2011). Tiree Tech Wave. Interfaces, Summer
2011, p.16-17. http://tireetechwave.org/events/ttw-1/interfaces-article/
[Dr10] Drewes, Heiko (2010): Eye Gaze Tracking for Human Computer
Interaction. Dissertation, LMU München: Fakultät für Mathematik, Informatik
und Statistik. https://edoc.ub.uni-muenchen.de/11591/
[Es13] Estyn (2013) The Impact of ICT on Pupils’ Learning in Primary Schools –
July 2013. Cardiff: Estyn, Her Majesty’s Chief Inspector of Education and
Training in Wales. http://www.estyn.gov.wales/thematic-reports/impact-ict-
pupils-learning-primary-schools-july-2013
[FA13] Michael Fourman, Alan Alexander, et al. (2013) Spreading the Benefits
of Digital Participation: An Interim Report for Consultation. Edinburgh: Royal
Society of Edinburgh, pp. 22–24.
[GI04] Greenhalgh, C.M., Izadi, S., Mathrick, J., Humble, J. and Taylor, I. (2004).
ECT - A Toolkit to Support Rapid Construction on Ubicomp Environments In:
System Support for Ubiquitous Computing Workshop - UbiSys04.
[HM15] Hawking, S., Musk, E., Wozniak, S., et al. (2015). Autonomous Weapons:
an Open Letter from AI & Robotics Researchers. Future of Life Institute.
http://futureoflife.org/AI/open_letter_autonomous_weapons
[HB10] Hendler, J., & Berners-Lee, T. (2010). From the Semantic Web to social
machines: A research challenge for AI on the World Wide Web. Artificial
Intelligence, 174(2), 156–161. doi:10.1016/j.artint.2009.11.010
[HM12] Hooper, C.J., Marie, N., Kalampokis, E. 2012. Dissecting the Butterfly:
representation of disciplines publishing at the Web Science conference series. In
Proceedings of the 3rd Annual ACM Web Science Conference (WebSci '12). ACM,
New York, NY, USA, 137-140.
[Hu16] E. Hunt (2016). Tay, Microsoft's AI chatbot, gets a crash course in racism
from Twitter. The Guardian. Thursday 24 March 2016.
https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-
chatbot-gets-a-crash-course-in-racism-from-twitter
[IE08] IEEE (2008). Special Report: The Singularity. Spectrum, IEEE, 45(10).
http://spectrum.ieee.org/static/singularity
[IH97] Ishii, H. and Ullmer, B. 1997. Tangible bits: towards seamless interfaces
between people, bits and atoms. In Proceedings of the ACM SIGCHI Conference
on Human factors in computing systems (CHI '97). ACM, New York, NY, USA, 234-
241. DOI: 10.1145/258549.258715
[JR89] Jeff Johnson, Teresa L. Roberts, William Verplank, David C. Smith, Charles
H. Irby, Marian Beard, and Kevin Mackey (1989). The Xerox Star: A Retrospective.
IEEE Computer, 22(9):11–26. doi:10.1109/2.35211
[Jo07] William Jones (2007). Keeping Found Things Found: The Study and
Practice of Personal Information Management. Morgan Kaufmann
[KH02] Keim, D.A., Hao, M.C., Dayal, U., Hsu, M.: Pixel Bar Charts: A Visualization
Technique for Very Large Multi-Attribute Data Sets. Information Visualization
Journal. 1, (2002).
[KK11] Daniel Keim, Jörn Kohlhammer, Geoffrey Ellis and Florian Mansmann
(eds). (2011). Mastering the Information Age Solving Problems with Visual
Analytics, Eurographics Association, ISBN 978-3-905673-77-7.
http://www.vismaster.eu/book/
[KD10] Haliyana Khalid and Alan Dix. 2010. The experience of photologging:
global mechanisms and local interactions. Personal Ubiquitous Comput. 14, 3
(April 2010), 209-226. doi: 10.1007/s00779-009-0261-4
[LG14] Liu, Y., Goncalves, J., Ferreira, D., Xiao, B., Hosio, S., & Kostakos, V. (2014).
CHI 1994-2013: Mapping two decades of intellectual progress through co-word
analysis. Proc. Conference on Human Factors in Computing Systems (CHI),
Toronto, Canada, 3553-3562.
[MD14] A. Morgan, A. Dix, M. Phillips and C. House (2014). Blue sky thinking
meets green field usability: can mobile internet software engineering bridge the
rural divide? Local Economy, September–November 2014. 29(6–7):750–761.
(Published online August 21, 2014). doi: 10.1177/0269094214548399
[Mo10] Jim Motavalli (2010). The Dozens of Computers That Make Modern Cars
Go (and Stop). The New York Times, Feb. 4, 2010.
http://www.nytimes.com/2010/02/05/technology/05electronics.html
[MK93] Michael J. Muller and Sarah Kuhn. 1993. Participatory design. Commun.
ACM 36, 6 (June 1993), 24-28. DOI=http://dx.doi.org/10.1145/153571.255960
[ND86] Donald A. Norman and Stephen W. Draper. 1986. User Centered System
Design; New Perspectives on Human-Computer Interaction. L. Erlbaum Assoc.
Inc., Hillsdale, NJ, USA.
[No88] D. Norman, The Design of Everyday Things. New York: Basic Book, 1988.
[NT15] Don Norman and Bruce Tognazzini (2015). How Apple Is Giving Design
A Bad Name. Fast Company, November 10, 2015 (accessed 19/1/2016)
[PV15] Pinder, C., Vermeulen, J., Beale, R., and Hendley, R. Exploring
Nonconscious Behaviour Change Interventions on Mobile Devices. MobileHCI'15
Adjunct, ACM Press (2015).
[Po81] J. B. Postel (Ed.), Internet Protocol, IETF RFC 791, Sept. 1981.
https://tools.ietf.org/html/rfc791
[RM91] Robertson, G.G., Mackinlay, J.D., Card, S.K.: Cone Trees: animated 3D
visualizations of hierarchical information. Proc. CHI’91. pp. 189–194. ACM Press,
New Orleans, USA (1991).
[Ro15] Brishen Rogers (2015). The Social Costs of Uber. Online Symposium
Grassroots Innovation & Regulatory Adaptation, The University of Chicago Law
Review Dialogue. https://lawreview.uchicago.edu/page/dialogue (accessed
19/1/2016)
[RM13] Rogers, Yvonne and Gary Marsden (2013) Does he take Sugar? Moving
Beyond the Rhetoric of Compassion. Interactions 20(4):48-57.
[SP10] Shneiderman, B. and Plaisant, C., Designing the User Interface: Strategies
for Effective Human-Computer Interaction: Fifth Edition, Addison-Wesley Publ.
Co., Reading, MA (2010)
[Si12] Steven Sinofsky (2012). Designing the Windows 8 touch keyboard. MSDN
blog, Microsoft, July 17, 2012. (accessed 19/1/2016)
https://blogs.msdn.microsoft.com/b8/2012/07/17/designing-the-windows-8-
touch-keyboard/
[SK82] David Canfield Smith, Charles Irby, Ralph Kimball, Bill Verplank and Eric
Harslem (1982). Designing the Star User Interface. BYTE, April 1982, pp. 242–
282
[Ta15] Erica Taschler (2015). A Crumbling Monopoly: The Rise of Uber and the
Taxi Industry’s Struggle to Survive. Institute for Consumer Antitrust Studies, News
and Views, June 2015. Accessed 19/1/2016,
http://www.luc.edu/law/centers/antitrust/publications/news_views/
[TQ09] Lucia Terrenghi, Aaron Quigley, and Alan Dix. 2009. A taxonomy for and
analysis of multi-person-display ecosystems. Personal Ubiquitous Comput. 13, 8
(November 2009), 583-598. DOI=http://dx.doi.org/10.1007/s00779-009-0244-
5
[TH15] Thanassis Tiropanis, Wendy Hall, Jon Crowcroft, Noshir Contractor,
Leandros Tassiulas (2015). Network Science, Web Science, and Internet Science.
Communications of the ACM, Vol. 58 No. 8, Pages 76-82 doi:10.1145/2699416
[TM15] Turchi, T., Malizia, A. and Dix, A. (2015). Fostering the Adoption of
Pervasive Displays in Public spaces using Tangible End-User Programming. IEEE
Symposium on Visual Languages and Human-Centric Computing, At Atlanta,
Georgia, US. 18-22 Oct 2015
[Wa15] Ben Walker (2015) Every Day Big Data Statistics – 2.5 Quintillion Bytes
Of Data Created Daily. Virtualization and Cloud News
http://www.vcloudnews.com/every-day-big-data-statistics-2-5-quintillion-
bytes-of-data-created-daily/
[We16] Webster, B. (2016). People don't need any more furniture, says, er, Ikea.
The Times, 18th Jan. 2016.
[WA15] Weichel, C., Alexander, J., Karnik, A. & Gellersen, H. (2015). Connected
tools in digital design. IEEE Pervasive Computing, 14(2), pp. 18–21.
[WD97] Wood, A., Dey, A., Abowd, G.: Cyberdesk: Automated Integration of
Desktop and Network Services. In: Proc. of the Conference on Human Factors in
Computing Systems (CHI 97), pp. 552–553, ACM Press (1997)