Human Computer Interaction Summer 2017 Lecture Book
1 Introduction to HCI
Compiled by Shipra De, Summer 2017
Humans
>> Hello.
>> There are a few more as well like smelling and tasting, but we won't deal with those as much.
>> But Morgan has more than senses. She also has memories, experiences, skills, knowledge.
>> Thanks.
This is a computer. Or at least, this is probably what you think of when you think of the term computer.
But this is also a computer. And so is this. And so is this. And so is this. And so is this.
>> Hey!
>> Right, and so is this. And so is this, and this, and this, and this, and even this. And so is this. And so is
this. And so is this. [SOUND] And so is this. And so is this.
We have humans and we have computers and we're interested in the interaction between them. That
interaction can take different forms though. The most obvious seems to be the human interacts with
the computer and the computer interacts with the human in response. They go back and forth
interacting, and that's a valid view. But it perhaps misses the more interesting part of HCI.
We can also think of the human interacting with the task, through the computer.
Ideally in this case, we're interested in making the interface as invisible as possible, so that the user spends as little time as possible focusing on the interface and instead focuses on the task they're trying to accomplish.
Realistically, our interfaces are likely to stay somewhat visible. But our goal is to let the user spend as
much time as possible thinking about the task, instead of thinking about our interface. We can all probably remember times when we've interacted with a piece of software and felt like we spent all our time thinking about how to work the software, as opposed to accomplishing what we were using the software for in the first place.
We'll talk extensively about the idea of disappearing interfaces and designing with tasks in mind. But in
all likelihood, you've used computers enough to already have some experience in this area. So take a
moment and reflect on some of the tasks you do each day involving computers. Try to think of an
example where you spend most of your time thinking about the task and an example where you spend
most of your time thinking about the tool.
Video games actually give us some great examples of interfaces becoming invisible. A good video game
is really characterized by the player feeling like they're actually inside the game world, as opposed to
controlling it by pressing some buttons on a controller. We can do that through some intuitive
controller design, like pressing forward to move forward and backward to move backward. But a
lot of times we'll rely on the user to learn how to control the game over time. As they learn, the controller fades away and the interaction between them and the game becomes invisible. A classic example of a place where interaction is more visible is the idea of having more than one remote control that controls what feels like the same system. So I have these two controllers that control my TV and my cable box together, and for me it's never entirely clear which one to use for what.
One of the most exciting parts of HCI is its incredible ubiquity. Computers are all around us and we interact with them every day. It's exciting to think about designing the types of tools and interfaces we
spend so much time dealing with, but there's a danger here too. Because we're all humans interacting
with computers, we think we're experts at human-computer interaction. But that's not the case.
We might be experts at interacting with computers, but that doesn't make us experts at designing
interactions between other humans and computers. We're like professional athletes or world-class
scientists. Just because we're experts doesn't mean we know how to help other people also become
experts.
Now, what we've described so far is a huge field, far too big to cover in any one class. In fact, there are lots of places where you can get an entire master's degree or PhD in human-computer interaction. Here are some of the schools that offer programs like that. And these are just the schools that offer actual degree programs in HCI, not computer science degrees with specializations in HCI, which would be
almost any computer science program. So let's look more closely at what we're interested in for the
purpose of the next several weeks. To do that, let's look at where HCI sits in a broader hierarchy of
fields.
We can think of HCI as a subset of a broader field of human factors engineering. Human factors
engineering is interested in a lot of the same ideas that we're interested in. But they aren't just
interested in computers. Then there are also subdisciplines within HCI. This is just one way to
represent this. Some people, for example, would put UI design under UX design, or put UX design on
the same level as HCI, but this is the way I choose to present it. Generally, these use many of the same
principles that we use in HCI, but they might apply them to a more narrow domain, or they might have
their own principles and methods that they use in addition to what we talk about in HCI in general. So
to get a feel for what we're talking about when we discuss HCI, let's compare it to these other different
fields.
First let's start by comparing HCI to the broader field of human factors. Human factors is interested in
designing interactions between people and products, systems or devices. That should sound familiar.
We're interested in designing the interactions between people and computers, and computers are themselves products or systems. But human factors is interested in the non-computing parts of this as well.
Let's take an example. I drive a pretty new electric car, which means there are tons of computers all
over it. From an HCI perspective, I might be interested in visualizing the data on the dashboard, or
helping the driver control the radio. Human factors is interested not only in how I interact with the
computerized parts of the car but the non-computerized parts as well. It's interested in things like the
height of the steering wheel, the size of the mirrors, or the positioning of the chair. It's interested in
designing the entire environment, not just the electronic elements. But that means it's interested in a
lot of the same human characteristics that we care about, like how people perceive the world, and
their own expectations about it. So many of the principles we'll discuss in this class come from human factors research.
For many years, human-computer interaction was largely about user interface design. The earliest
innovations in HCI were the creation of things like the light pen and the first computer mouse, which allowed for flexible interaction with things on screen. But the focus was squarely on the screen.
And so, we developed many principles about how to design things nicely for a screen. We borrowed
from the magazine and print industries and identified the value of grids in displaying content and guiding the user's eyes around our interfaces.
We developed techniques for helping interfaces adapt to different screen sizes and we developed
methods for rapidly prototyping user interfaces using pen and paper or wire frames.
Through this rich history, UI design really became its own well defined field. In fact, many of the
concepts we'll cover in HCI were originally developed in the context of UI design. But in HCI, we're
interested in things that go beyond the user's interaction with a single screen. Technically, you can cover that in UI design as well, but traditionally most of the UI design classes I see focus on on-screen
interaction. In HCI, we'll talk about the more general methods that apply to any interface.
The relationship between HCI and user experience design is a little bit closer. In fact, if you ask a dozen
people working in the field, you'll likely get a dozen different answers about the difference.
For the purposes of our conversations though, we'll think about the difference like this. HCI is largely
about understanding the interactions between humans and computers. User experience design is
about dictating the interactions between users and computers. In order to design user experiences
very well, you need to understand the user, you need to understand their interactions with interfaces.
And that's why I personally consider user experience design to be a subfield of the broader area of HCI.
In our conversations, we'll use the principles and methods from HCI to inform how we design user
experiences. But it's important to note that this relationship is deeply symbiotic. We might use that
understanding to inform how we design the user experiences. But then we evaluate those designs and
based on their success or failures, we'll use that to inform our increasing knowledge of human
computer interaction itself. If our understanding leads us to create good designs, that provides
evidence that our understanding is correct. If we create a design with some understanding and that
design doesn't work, then maybe our understanding was flawed and now our understanding of human
computer interaction as a whole will increase. This is similar to something called design-based research, which we'll talk about later: using the results of our designs to conduct research.
The research side of HCI connects to the relationship between HCI and psychology. And if we zoom out
even further on this hierarchy of disciplines, we might say that human factors engineering itself is in
many ways the merger of engineering and psychology, as well as other fields like design and cognitive science. In HCI, the engineering side takes the form of software engineering, but this connection to
psychology remains, and in fact, it's symbiotic. We use our understanding of psychology, of human
perception, of cognition to inform the way we design interfaces. We then use our evaluations of those
interfaces to reflect on our understanding of psychology itself. In fact, at Georgia Tech, the Human
Computer Interaction class is cross listed as a Computer Science and Psychology class.
So let's take an example of this. In 1992, psychologists working at Apple wanted to study how people
organized the rapid flow of information in their work spaces. They used what they learned to develop an interface built around the metaphor of piles of documents, much like the informal piles people create on their real desks. Finally, they used the results of that development to reflect on how people were managing their work
spaces in the first place. So in the end, they had a better understanding of the thought processes of
their users as well as an interface that actually helped users. So in the end, their design of an interface within HCI informed their understanding of psychology more generally. We came away with a better
understanding of the way humans think about their work spaces because of our experience designing
something that was supposed to help them think about their work spaces.
Now that we've talked at length about what HCI isn't, let's talk a little bit about what HCI actually is. On
the one hand, HCI is about research. Many of the methods we'll discuss are about researching the user,
understanding their needs, and evaluating their interactions with designs that we've prototyped for
them. But on the other hand, HCI is about design. After all, design is that prototyping phase, even
though we're prototyping with research in mind. HCI is about designing interactions to help humans
interact with computers, oftentimes using some known principles for good interaction: things like designing with distributed cognition in mind, making sure the user develops good mental models of the way the interface works, or keeping universal design in mind. We'll talk about
all these topics later in our conversations. You don't need to worry about understanding any of these
right now. What is important to understand right now, is that these aren't two isolated sides.
The results of our user research inform the designs we construct, and the results of our designs inform
our ongoing research.
Again, you might notice this is very similar to the feedback cycles we're designing for our users. They use what they know to participate in the task, and then use the feedback from that participation to refine what they know.
So now you know what we're going to cover in our exploration of HCI. We're going to talk about some
of the fundamental design principles that HCI researchers have discovered over the years. We're going
to talk about performing user research, whether it be for new interfaces, or exploring human cognition.
We're going to talk about the relationship between these two, how our research informs what we
design, and how what we design helps us conduct research. And we're going to talk about how these
principles work in a lot of domains, from technologies like augmented reality, to disciplines like
healthcare. I hope you're excited. I know I am. I like to think of this not just as a course about human-
computer interaction, but also an example of human-computer interaction: humans using computers in new and engaging ways to teach about human-computer interaction. We hope this course exemplifies the
principles as well as teaches them.
Introduction to CS6750
[MUSIC] Now that you understand a little bit about what human-computer interaction is, let's talk
about what this class is going to be like. In this lesson, I am going to take you through a high level
overview of this class. What material we'll cover, how it fits together, and what you should expect to
know by the end of the course. I'll also talk a little bit about the assessments we'll use in the class, but
be aware, these assessments are only applicable to students taking this class through Georgia Tech. If
you're watching this course on your own or taking it to complement other courses you're taking, those
assessments won't apply to you, but you'll get to hear a little bit about what students in the actual
course do. If you are a student in the course, you should know the assessments do tend to change a bit
semester to semester. I'm going to try and stay as general as possible to capture future changes, but
make sure to pay attention to the specific materials you're provided for your semester.
In education, a learning goal is something we want you to understand at the end of the course. It's the
knowledge contained within your head that you might not have had when we got started. In this class
we have three major learning goals.
First, we want you to understand some of the common principles in human-computer interaction. These are the tried-and-true rules on how to design good interactions between humans and computers.
Second, we want you to understand the methods of HCI: gathering requirements from users, developing design alternatives, and evaluating the results.
Third, we want you to understand the expanse of the human-computer interaction field and the current applications of HCI. HCI is really everywhere, from domains like healthcare, to
technologies like virtual reality, to emerging techniques like sonification.
While a learning goal is something we want you to know at the end of the course, a learning outcome is something we want you to be able to do. This course really has one learning outcome, but there are
some nuances to it.
The learning outcome for this course is to be able to design effective interactions between humans and
computers. The first part of this learning outcome is to design. But what is design? Well for us design is
going to take two forms. The first is design as a product: the artifact, the interface we actually put in front of users. But design has a second form as well: design is also a process, where you gather information, use it to develop design alternatives, evaluate them with users, and revise them accordingly. When designing
an interface for some task, I would ask some potential users how they perform that task right now. I would develop multiple different ideas for how we could help them, give those to the users to evaluate, and use their experiences to improve the interface over time.
We'll cover the principles uncovered by human factors engineering and human-computer interaction research. And we'll cover the methods used in HCI: gathering user requirements, developing designs, and evaluating new interfaces.
The first part of this learning outcome, to design, needed some definition, but the second part seems pretty straightforward, right? Not exactly. Effectiveness is defined in terms of our goal.
The most obvious goal here might be usability, and for a lot of designs that's exactly what we're interested in. If I'm designing a new thermostat, I want the user to be able to create the outcome they want as easily as
possible.
Or it could be that my goal isn't to make a certain activity easier, but rather to change that activity.
Maybe I'm interested in reducing a home's carbon footprint. In that case, my goal is to get people to
use less electricity. I might design the interface of the thermostat specifically to encourage people to
use less. Maybe I'd show them a comparison to their neighbor's usage, or allow them to set energy
usage goals. Or I could make the thermostat physically harder to turn up and down. So effectiveness is
very much determined by the goal that I have in mind. We'll generally assume that our goal is usability,
unless we state otherwise. But we'll definitely talk about some of those other goals as well.
The final part of our desired learning outcome is between humans and computers. We want to design
effective interactions between humans and computers. Well, what is important to note here, is where
we're placing the emphasis. Note that we didn't say designing effective interfaces, because that puts
the entire focus on the interface. We're deeply interested in the human's role in this interaction.
So rather than designing interfaces, designing programs, designing tools, we're designing interactions.
We're designing tasks. We're designing how people accomplish their goals, not just the interface that
they use to accomplish their goals.
If you set out to design a better thermostat, you'd probably end up with just a thermostat. But if you set out to design a better way for people to control the temperature in their home, you might end up with Nest, a device that learns from the user and starts to control the temperature automatically.
Learning strategies are how we plan to actually impart that knowledge to you. This is how we attempt
to help you achieve the learning goals and learning outcomes. Within these videos, we'll use a number
of different strategies to try to help you understand the core principles and methodologies of HCI.
We'll use learning by example. Every lesson, and in fact this course as a whole, is organized around a collection of running examples that will come up over and over again.
We'll ask you to reflect on times when you've encountered these things in your own everyday life.
These strategies are useful because they connect to your own personal experiences, but once again, there's a danger here: your experiences are not everyone's experiences.
Within the full course at Georgia Tech, there are a number of other strategies in which you'll engage as
well. First, we're really passionate about leveraging the student community in this class to improve the
experience for everyone. Taking this class with you are people with experience in a variety of
industries, many of whom have significant experience in HCI. So some strategies we'll use include peer
learning, collaborative learning, learning by teaching, and communities of practice. You'll learn both
from each other and with each other. You'll play the role of student, teacher, and partner, and you will
learn from each perspective.
In addition, the entire course is built around the idea of project-based learning. Early in the semester,
you'll form a team and start looking at a problem we've selected, or maybe one in which you're already
interested. This project will then become the domain through which you explore the principles and
methods of human-computer interaction. Who knows? By the end of the semester, you might even
generate something with the potential to go forward as a real-world product, or as a research project.
Learning goals are what we want you to understand. Learning outcomes are what we want you to be able to do. Learning assessments, then, are how we evaluate whether you can do what we want you to be able to do and understand what we want you to understand. The learning outcome of this class is to
be able to design effective interactions between humans and computers.
So the primary assessments in this class are to, say it with me, design effective interactions between
humans and computers. You'll start with some relatively small scale tasks, recommending
improvements to existing interfaces or undertaking some small design challenges. But as the semester
goes on, you'll scope up towards a bigger challenge. You'll initially investigate that challenge individually
and then you'll merge into teams to prototype and evaluate a full solution to the challenge you chose.
We'll close by talking about the overall structure of the content you'll be consuming. The course's
lessons are designed to be as independent as possible, so you should be able to skip around if you want,
but there's a certain logic to our planned presentation order. We discussed earlier the model of HCI, how
design informs research and research then informs design, so we'll start by discussing some of the core
design principles of HCI.
Then we'll discuss the research methodologies for uncovering new user information, the iterative
design lifecycle. We'll close by giving you the opportunity to peek at what's going on in the HCI
community at large.
Here are five quick tips for doing well in this course. [SOUND] Number one. Look over the assignments
early. Some of our assignments you can sit down and do in an hour. But others require some
advance coordination to talk to users, develop prototypes or test with real people. So go ahead and at
least read all the assignment descriptions. Number two, start the assignments early. That's not just
typical teacher talk saying, you can't get this done at the last minute. You probably can, but you're
using interfaces like these in your everyday life. By starting early you're likely to get inspiration just in
your day to day routine, and that's going to make writing the assignment significantly easier than trying
to sit down and come up with something on the spot. Number three, participate, interact with your
classmates, post on the forums, read others' posts. The knowledge and experience you gain there is just
as valuable as anything you'll get listening to me in these videos. Number four, select an application
area to explore. Next lesson you'll hear about several of the interesting areas of HCI research and
development going on right now. Developing in many of these areas is outside the scope of this class,
but I encourage you to pick an area in which you're interested and mentally revisit it throughout the
course. Number five, leave behind what you know, or at least try. HCI is a huge area and yet many
people believe that because they're already good at using computers, they'd be good at designing user
experiences. But HCI above all else is about gaining a grounded understanding of the user's needs not
assuming we already understand them. So while it's great to apply the course's principles to your every
day life, be cautious about designing too narrowly based only on your own experiences.
In this lesson, I've tried to give you some expectations of what this course will be like. We've gone over
the course's goals, outcomes, learning strategies and assessments.
We've covered the course's learning outcome in great detail, to design effective interactions between
humans and computers. Now, I've focused mostly on the video material, because the assignments, projects
and exams are separate from these videos, and are likely to change pretty significantly semester to
semester.
[MUSIC] [SOUND] Computers are finding their way into more and more devices, and as a result HCI is
becoming more and more ubiquitous. It used to be that you wouldn't really need to think too much
about HCI when designing a car or designing a refrigerator, but more and more computing is pervading
everything. At the same time, new technological developments are opening up new areas for
exploration. We're seeing a lot of really fascinating progress in areas like virtual reality, augmented
reality, wearable devices. As we study HCI, we're going to talk a lot about things you've already used
like computers and phones. But we want you to keep in mind some of these more cutting edge
application areas as well. After all, if you're really interested in going into HCI professionally, you'll be
designing for these new application areas. So we're going to quickly preview some of these. We'll
divide them into three areas, technologies, domains and ideas. Technologies are emerging
technological capabilities that let us create new and interesting user interactions. Domains are pre-
existing areas that could be significantly disrupted by computer interfaces like healthcare and
education. Ideas span both of these. They are the theories about the way people interact with
interfaces and the world around them. Now, our delineation here is somewhat artificial; there's a lot of overlap. New technologies like augmented reality are what allow emerging ideas like context-sensitive computing to really have the power that they do. But for organization, we'll group our application areas
into these three categories. When one of these areas catches your eye, take a little while and delve
into it a little bit deeper. Then keep that topic area in mind as you go through the rest of the HCI
material. We'll revisit your chosen area throughout the course, and ask you to reflect on the
application of the course's principles and methods to your application area.
The year that I'm recording this is what many have described as the year that virtual reality finally hits
the mainstream. By the time you watch this, you'll probably be able to assess whether or not that was
true, so come back in time and let me know. Virtual reality is an entire new classification of interaction
and visualization and we're definitely still at the beginning of figuring out what we can do with these
new tools. You could be one of the ones who figures out the best way to solve motion sickness or how
to get proper feedback on gestural interactions.
A lot of the press around virtual reality has been around video games, but that's definitely not the only
application. Tourism, commerce, art, education, virtual reality has applications to dozens of spaces. For
example, there is a lab in Michigan that's using virtual reality to treat phobias. They're creating a safe
space where people can very authentically and realistically confront their fears. The possible
applications of virtual reality are really staggering. So I'd encourage you to check them out as you go
through this class.
Virtual reality generally works by replacing the real world's visual, auditory, and sometimes even olfactory or kinesthetic stimuli with its own input. Augmented reality, on the other hand, complements
what you see and hear in the real world. So for example, imagine a headset like a Google Glass that
automatically overlays directions right on your visual field. If you were driving, it would highlight the
route to take, instead of just popping up some visual reminder. The input it provides complements stimuli coming from the real world instead of just replacing them. And that creates some
enormous challenges, but also some really incredible opportunities as well.
Imagine the devices that can integrate directly into our everyday lives, enhancing our reality. Imagine
systems that could, for example, automatically translate text or speech in a foreign language, or could
show your reviews for restaurants as you walk down the street. Imagine a system that students could
use while touring national parks or museums, that would automatically point out interesting
information, custom tailored to that student's own interests. The applications of augmented reality
could be truly stunning, but it relies on cameras to take input from the world, and that actually raises some significant privacy concerns.
Ubiquitous computing [SOUND] refers to the trend toward embedding computing power in more and
more everyday objects. You might also hear it referred to as pervasive computing, and it's deeply
related to the emerging idea of an Internet of Things. A few years ago, you wouldn't have found
computers in refrigerators and wristwatches, but as microprocessors became cheaper and as the world
became increasingly interconnected, computers are becoming more and more ubiquitous. Modern HCI
means thinking about whether someone might use a computer while they're driving a car or going on a
run. It means figuring out how to build smart devices that offload some of the cognitive load from the user, like refrigerators that track their own contents and deliver advice to users at just the right time.
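To make that idea concrete, here is a minimal sketch of that kind of cognitive offloading. It is purely illustrative, not from the lecture: the SmartFridge class, its items, and its two-day horizon are all hypothetical.

```python
# A hypothetical sketch of the cognitive offloading described above: a
# refrigerator that tracks its own contents and surfaces advice at just
# the right time, so the user doesn't have to remember expiry dates.
from datetime import date, timedelta

class SmartFridge:
    def __init__(self) -> None:
        self.contents: dict[str, date] = {}  # item name -> expiry date

    def add(self, item: str, expires: date) -> None:
        self.contents[item] = expires

    def advice(self, today: date, horizon_days: int = 2) -> list[str]:
        """Return advice about items expiring within the horizon."""
        soon = today + timedelta(days=horizon_days)
        return [f"Use the {item} soon; it expires on {when}."
                for item, when in sorted(self.contents.items())
                if today <= when <= soon]

fridge = SmartFridge()
fridge.add("milk", date(2017, 6, 12))
fridge.add("eggs", date(2017, 6, 30))
print(fridge.advice(today=date(2017, 6, 11)))
# -> ['Use the milk soon; it expires on 2017-06-12.']
```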
This push for increasing pervasiveness has also led to [SOUND] the rise of wearable technologies.
Exercise monitors are probably the most common examples of this, but smart watches, Google Glass,
augmented reality headsets, and even things like advanced hearing aids and robotic prosthetic limbs,
are all examples of wearable technology. This push carries us into areas usually reserved for the human body itself.
A lot of the current focus on robotics is on their physical construction and abilities or on the artificial
intelligence that underlies their physical forms. But as robotics becomes more and more mainstream,
we're going to see the emergence of a new subfield of human-computer interaction, human-robot
interaction. The field actually already exists. The first conference on human robot interaction took
place in 2006 in Salt Lake City, and several similar conferences have been created since then. Now as
robots enter the mainstream, we're going to have to answer some interesting questions about how we
interact with them.
For example, how do we ensure that robots don't harm humans through faulty reasoning? How do we
integrate robots into our social lives, or do we even need to? As robots are capable of more and more,
how do we deal with the loss of demand for human work? Now these questions all lie at the
intersection of HCI, artificial intelligence and philosophy in general. But there are some more concrete
questions we can answer as well. How do we pragmatically equip robots with the ability to naturally interact with the humans around them?
One of the biggest changes to computing over the past several years has been the incredible growth of
mobile as a computing platform. We really live in a mobile-first world, and that introduces some
significant design challenges.
Screen real estate is now far more limited, the input methods are less precise and the user is distracted.
But mobile computing also presents some really big opportunities for HCI. Thanks in large part to
mobile we're no longer interested just in a person sitting in front of a computer. With mobile phones,
most people have a computer with them at all times anyway. We can use that to support experiences
from navigation to star gazing. Mobile computing is deeply related to fields like context aware
computing, ubiquitous computing, and augmented reality, as it possesses the hardware necessary to complement those efforts. But even on its own, mobile computing presents some fascinating challenges
to address.
>> What time is it? >> You can go ahead and go to lunch. >> Did that exchange make any sense? I asked Amanda for the time and she replied by saying I can go ahead and go get lunch. The text seems completely nonsensical, and yet hearing that, you may have filled in the context that makes this
conversation logical. You might think that I asked a while ago what time we were breaking for lunch, or
maybe I mentioned that I forgot to eat breakfast. Amanda would have that context and she could use it
to understand why I'm probably asking for the time. Context is a fundamental part of the way humans
interact with other humans. Some lessons we'll talk about even suggest that we are completely
incapable of interacting without context.
If context is such a pervasive part of the way humans communicate, then to build good interfaces
between humans and computers, we must equip computers with some understanding of context.
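As a toy illustration of what equipping a computer with context might look like, here is a minimal sketch. It is entirely my own, not from the lecture: the function name and the hand-built context dictionary are hypothetical, and a real context-aware system would infer context from sensors and history rather than from a dictionary.

```python
# A toy sketch of context-aware behavior: the same question earns a
# different answer once the system knows something about the asker.
from datetime import time

def answer_time_question(now: time, context: dict) -> str:
    """Answer 'what time is it?' using whatever context we have."""
    lunch_break = context.get("waiting_for_lunch_break")
    if lunch_break is not None and now >= lunch_break:
        # The asker really wanted to know when they could break for lunch.
        return "You can go ahead and go to lunch."
    return f"It's {now.strftime('%I:%M %p')}."

# Amanda knows David asked earlier when they were breaking for lunch:
print(answer_time_question(time(12, 5), {"waiting_for_lunch_break": time(12, 0)}))
# Without that context, the literal answer is all we can give:
print(answer_time_question(time(9, 30), {}))
```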
As this course goes on, you'll find that I'm on camera more often than you're accustomed to seeing in a
Udacity course. Around half this course takes place with me on camera. There are a couple of reasons
for that. The big one is that this is Human Computer Interaction. So it makes sense to put strong
emphasis on the Human. But another big one is that when I'm on camera, I can express myself through gestures instead of just words and voice intonation. I can, for example, make a fist and really drive home
and emphasize a point. I can explain that a topic applies to a very narrow portion of the field or a very
wide portion of the field. We communicate naturally with gestures every day. In fact, we even have an
entire language built out of gestures. So wouldn't it be great if our computers could interpret our
gestures as well?
That's the emerging field of gesture-based interaction. You've seen this with things like the Microsoft Kinect, which has far-reaching applications from healthcare to gaming. We've started to see some
applications of gesture based interaction on the go as well with wrist bands that react to certain hand
motions. Gesture-based interaction has enormous potential. The fingers have some of the finest muscle control in the human body.
I always find it interesting how certain technologies seem to come full circle. For centuries we
only interacted directly with the things that we built and then computers came along. And suddenly we
needed interfaces between us and our tasks. Now, computers are trying to actively capture natural
ways we've always interacted. Almost every computer I encounter nowadays has a touch screen. That's
a powerful technique for creating simple user interfaces because it shortens the distance between the
user and the tasks they’re trying to accomplish. Think about someone using a mouse for the first time.
He might need to look back and forth from the screen to the mouse to see how interacting down here changes things he sees up here. With a touch-based interface, he interacts the same way he uses things in the real world around him. A challenge can sometimes be a lack of precision, but to make up for that we've also created pen-based interaction. Just like a person can use a pen on paper, they can also use a
pen on a touch screen. And in fact, you might be quite familiar with that, because most Udacity courses
use exactly that technology. They record someone writing on a screen. That gives us the precision
necessary to interact very delicately and specifically with our task. And as a result tablet based
interaction methods have been used in fields like art and music. Most comics you find on the internet
are actually drawn exactly like this, combining the precision of human fingers with the power of
computation.
One of the biggest trends of the information age is the incredible availability of data. Scientists and
researchers use data science and machine learning to look at lots of data and draw conclusions. But
oftentimes those conclusions are only useful if we can turn around and communicate them to ordinary
people. That's where information visualization comes in. Now at first glance you might not think of
data visualization as an example of HCI. After all, I could draw a data visualization on a napkin and print it in a newspaper, and there's no computer involved anywhere in that process. But computers give us a
powerful way to re-represent data in complex, animated, and interactive ways. We'll put links to some
excellent examples in the notes. Now, what's particularly notable about data visualization in HCI is the degree to which it fits with our methodologies for designing good interfaces. One goal of a
good interface is to match the user's mental model to the reality of the task at hand. In the same way,
the goal of information visualization is to match the reader's mental model of the phenomenon to the
reality of it. So the same principles we discussed for designing good representations apply directly to
designing good visualizations. After all, a visualization is just a representation of data.
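To ground that claim, here is a minimal sketch of re-representing the same data visually. It is my own illustration, not from the lecture: the data values and labels are hypothetical, and it assumes the matplotlib library is installed.

```python
# A minimal sketch: the same numbers, re-represented as a chart so the
# reader's mental model of the trend can form at a glance.
import matplotlib.pyplot as plt

# Hypothetical data: monthly household energy usage in kilowatt-hours.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
usage_kwh = [620, 580, 540, 470, 510, 660]

fig, ax = plt.subplots()
ax.bar(months, usage_kwh)
ax.set_xlabel("Month")               # label the representation so it
ax.set_ylabel("Energy usage (kWh)")  # carries its own interpretation
ax.set_title("Monthly household energy usage")
plt.show()
```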
CSCW stands for Computer-Supported Cooperative Work. The field is just what the name says: how do we use computers to support people working together? You're watching this course online, so odds are
that you've experienced this closely. Maybe you've worked on a group project with a geographically
distributed group. Maybe you've had a job working remotely. Distributed teams are one example of
CSCW in action but there are many others. The community often breaks things down into two
dimensions.
Time and place. We can think of design as whether we're designing for users in the same time and place, or for users at different times or in different places. This course is an example of designing for
different time and different place. You're watching this long after I recorded this, likely from far away
from our studio. Workplace chat utilities like Slack and HipChat would be examples of same time, different place. They allow people to communicate instantly across space, mimicking the real-time feel of face-to-face conversation.
Social computing is the portion of HCI that's interested in how computers affect the way we interact
and socialize. One thing that falls under this umbrella is the idea of recreating social norms within
computational systems. So for example, when you chat online, you might often use emojis or
emoticons. Those are virtual recreations of some of the tacit interaction we have with each other on a
day-to-day basis. So, for example, the same message can take on very different meanings depending on the emoji or emoticon provided. Social computing is interested in a lot more than just emojis, of course.
From online gaming and Wikipedia, to social media, to dating websites, social computing is really
interested in all areas where computing intersects with our social lives.
One of the most exciting application areas for HCI is in helping people with special needs. Computing
can help us compensate for disabilities, injuries, aging. Think of a robotic prosthetic, for example. Of
course, part of that is engineering, part of it is neuroscience. But it's also important to understand how
the person intends to use such a limb in the tasks they need to perform. That's HCI intersecting with
robotics.
Or take another example from some work done here at Georgia Tech by Bruce Walker, how do you
communicate data to a blind person? We've talked about information visualization, but if it's a
visualization, it's leaving out a significant portion of the population. So Dr. Walker's sonification lab
works on communicating data using sound. A lot of the emerging areas of HCI technology could have
extraordinary significance to people with special needs. Imagine virtual reality for people suffering
from some form of paralysis. Or imagine using artificial intelligence with context-aware computing to anticipate and support the needs of users with cognitive or motor impairments.
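As a rough illustration of the sonification idea mentioned above, here is a minimal sketch that maps data values to pitch, so an upward trend is heard as a rising tone. This is my own toy example, not Dr. Walker's method: the frequency range, note length, and file name are arbitrary choices, and it uses only the Python standard library.

```python
# A minimal sonification sketch: write a mono 16-bit WAV file where each
# data value becomes a short tone whose pitch reflects the value.
import math
import struct
import wave

RATE = 44100  # samples per second

def sonify(values, low_hz=220.0, high_hz=880.0, note_s=0.25, path="data.wav"):
    lo, hi = min(values), max(values)
    frames = bytearray()
    for v in values:
        # Linearly map the value's position in [lo, hi] to a frequency.
        frac = (v - lo) / (hi - lo) if hi > lo else 0.5
        freq = low_hz + frac * (high_hz - low_hz)
        for i in range(int(RATE * note_s)):
            sample = math.sin(2 * math.pi * freq * i / RATE)
            frames += struct.pack("<h", int(sample * 0.5 * 32767))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)   # mono
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(RATE)
        f.writeframes(bytes(frames))

sonify([1, 2, 3, 5, 8, 13])  # an upward trend you can hear
```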
Hi, and welcome to educational technology. My name is David Joyner and I'm thrilled to bring you this
course.
As you might guess, education is one of my favorite application areas of HCI. In fact, as I'm recording
this, I've been teaching educational technology at Georgia Tech for about a year, and a huge portion of
designing educational technology is really just straightforward HCI. But education puts some unique
twists on the HCI process. Most fascinatingly, education is an area where you might not always want to
make things as easy as possible. You might use HCI to introduce some desirable difficulties, some
learning experiences for students. But it's important to ensure that the cognitive load students experience during a learning task comes from the material itself, not from trying to figure out our interfaces. The worst thing you can do in HCI for education is raise the student's cognitive load because they're too busy thinking about your interface instead of the subject matter itself. Lots of very noble educational technology efforts have stumbled for exactly that reason.
A lot of current efforts in healthcare are about processing the massive quantities of data that are
recorded every day. But in order to make that data useful, it has to connect to real people at some
point. Maybe it's equipping doctors with tools to more easily visually evaluate and compare different
diagnoses. Maybe it's giving patients the tools necessary to monitor their own health and treatment
options. Maybe it's information visualization so patients can understand how certain decisions affect
their well-being. Maybe it's context aware computing that can detect when patients are about to do
something they probably shouldn't do. There are also numerous applications of HCI to personal health
like Fitbit for exercise monitoring or MyFitnessPal for tracking your diet. Those interfaces succeed if
they're easily usable for users. Ideally, they'd be almost invisible. But perhaps the most fascinating
upcoming intersection of HCI and health care is in virtual reality. Virtual reality exercise programs are
already pretty common to make living an active lifestyle more fun, but what about virtual reality for
therapy? That's actually already happening. We can use virtual reality to help people confront fears and
anxieties in a safe, but highly authentic place. Healthcare in general is concerned with the health of
humans. And computers are pretty commonly used in modern healthcare. So the applications of
human computer interaction to healthcare are really huge.
Classes on network security are often most concerned with the algorithms and encryption methods that
must be safeguarded to ensure secure communications. But the most secure communication strategies
in the world are weakened if people just refuse to use them. And historically, we've found people have
very little patience for instances where security measures get in the way of them doing their tasks. For
security to be useful, it has to be usable. If it isn't usable, people just won't use it. HCI can increase the usability of security in a number of ways. For one, it can make security actions simply easier to perform.
CAPTCHAs are forms meant to ensure that users are human. They used to involve recognizing letters in complex images, but now they're often as simple as a check-box. The computer recognizes
human-like mouse movements and uses that to evaluate whether the user is a human. That makes it
much less frustrating to participate in that security activity. But HCI can also make security more usable
by visualizing and communicating the need. Many people get frustrated when systems require passwords that meet certain standards of complexity, but that's because the requirements seem arbitrary. If the
system instead expresses to the user the rationale behind the requirement, the requirement can be
much less frustrating. I've even seen a password form that treats password selection like a game where
you're ranked against others for how difficult your password would be to guess. That's a way to
incentivize strong password selection making security more usable.
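Here is a minimal sketch of that rationale-first approach, assuming nothing about any real system: the function, the specific requirements, and the wording of each explanation are all hypothetical.

```python
# A minimal sketch: instead of rejecting a password as simply "invalid,"
# explain the rationale behind each unmet requirement, so the rules feel
# less arbitrary to the user.
def check_password(password: str) -> list[str]:
    """Return human-readable explanations for any unmet requirements."""
    feedback = []
    if len(password) < 12:
        feedback.append("Use at least 12 characters: each extra character "
                        "makes a brute-force guess exponentially harder.")
    if password.lower() == password or password.upper() == password:
        feedback.append("Mix upper- and lowercase letters: this enlarges the "
                        "set of symbols an attacker must try at each position.")
    if not any(ch.isdigit() for ch in password):
        feedback.append("Include a digit: dictionary attacks try plain "
                        "words first.")
    return feedback  # an empty list means every requirement is met

for message in check_password("correcthorse"):
    print(message)
```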
Video games are one of the purest examples of HCI. They're actually a great place to study HCI, because
so many of the topics we discuss are so salient. For example, we discussed the need for logical mapping
between actions and effects. A good game exemplifies that. The actions that the user takes with the
controller should feel like they're actually interacting within the game world. We discussed the power
of feedback cycles. Video games are near-constant feedback cycles: the user performs actions, evaluates the results, and adjusts accordingly. In fact, if you read through video game reviews, you'll find
that many of the criticisms are actually criticisms of bad HCI. The controls are tough to use, it's hard to
figure out what happened. The penalty for failure is too low or too high. All of these are examples of
poor interface design. In gaming, though, there's such a tight connection between the task and the interface that frustrations with the task can help us quickly identify problems with the interface.
Throughout our conversations we are going to explore some of the fundamental principles and
methods of HCI. Depending on the curriculum surrounding this material, you will complete
assignments, projects, exams and other assessments in some of these design areas. However we'd also
like you to apply what you learn to an area of your choice. So pick an area, either one we've mentioned
here, or one you'd like to know about separately, and keep it in mind as we go through the course. Our
hope is that by the end of the course you'll be able to apply what you learn here to the area in which
you're interested in working.
In this lesson, our goal has been to give you an overview of the exciting expanse of ongoing HCI research
and development. We encourage you to select a topic you find interesting, read about it a little bit
more, and think about it as you go through the course. Then in unit four we'll provide some additional
readings and materials on many of these topics for you to peruse. And in fact you can feel free to jump
ahead to there now as well. But before we get too far into what we want to design, we first must cover
the fundamental principles and methods of HCI.
[MUSIC] For this portion of our conversation about human-computer interaction, we're going to talk about some established principles that we've uncovered after decades of designing user interfaces. We
want to understand the fundamental building blocks of HCI, and separately we'll talk about how to build
on those foundations to do new research and new development. To get started, though, let's first
define some of the overarching ideas of design principles.
In this lesson, we're going to talk about the way we focus on users and tasks in HCI, not on tools and
interfaces on their own.
We're going to discuss different views on the user's role in the system.
At the heart of Human Computer Interaction is the idea that users use interfaces to accomplish some
task. In general, that interface wouldn't actually have to be technological. This cycle exists for things
like using pencils to write things or using a steering wheel to drive a car. But in HCI, we're going to
focus on interfaces that are in some way computational or computerized. What's most important here
though is our focus on the interaction between the user and the task through the interface, not just the interaction between the user and the interface itself. We're designing interfaces, sure, but to design a good interface, we need to understand both the user's goals and the tasks they're trying to accomplish.
Understanding the task is really important. One of the mistakes many novice designers make is jumping too quickly to the interface without understanding the task. For example, think about
designing a new thermostat. If you focus on the interface, the thermostat itself, you're going to focus
on things like the placement of the buttons or the layout of the screen, on whether or not the user can
actually read what's there, and things like that. And those are all important questions. But the task is
controlling the temperature in an area. When you think about the task rather than just the interface,
you think of things like Nest, which is a device that tries to learn from its user and act autonomously.
That's more than just an interface for controlling whether the heat or the air conditioning is on. That's
an interface for controlling the temperature in your house. By focusing on the task instead of just the interface, we open ourselves up to far more powerful designs.
Let's try identifying a task real quick. We're going to watch a short clip of Morgan. Watch what she
does, and try to identify what task she is performing. [MUSIC] What was the task in that clip?
If you said she's swiping her credit card, you're thinking a little too narrowly. Swiping her credit card is
just how she accomplishes her task. We're interested in something more like she's completing a
purchase. She's purchasing an item. She's exchanging goods. Those all put more emphasis on the
actual task she's accomplishing and let us think more generally about how we can make that interface
even better.
Here are five quick tips for identifying a user task. One: watch real users. Instead of just speculating or brainstorming, get out there and watch real users performing in the area in which you're interested.
Two: Talk to them! You don’t have to just watch them. Recruit some participants to come perform the
task and talk their way through it. Find out what they’re thinking, what their goals are, what their
motives are. Three: Start small. Start by looking at the individual little interactions. It’s tempting to
come in believing you already understand the task, but if you do, you’ll interpret everything you see in
terms of what you already believe. Instead, start by looking at the smallest operators the user performs.
Four: Abstract up. Working from those smaller observations, then try to abstract up to an
understanding of the task they’re trying to complete. Keep asking why they’re performing these actions
until you get beyond the scope of your design. For example: what is Morgan doing? Swiping a credit
card. Why? To make a purchase. Why? To acquire some goods. Why? To repair her car. Somewhere in
that sequence is likely the task for which we want to design. Five: You are not your user. Even if you
yourself perform the task for which you’re designing, you’re not designing for you: you’re designing for
everyone that performs the task. So, leave behind your own previous experiences and preconceived
notions about it. These five quick tips come up a lot in the methods unit of HCI. HCI research methods
are largely about understanding users, their motivations, and their tasks. So, we’ll talk much more
about this later, but it’s good to keep in mind now.
The ultimate goal of design in HCI is to create interfaces that are both useful and usable. Useful means
that the interface allows the user to achieve some task. But usefulness is a pretty low bar. For example,
a map is useful for finding your way from one place to another, but it isn't the most usable thing in the
world. You have to keep track of where you are. You have to plot your own route. And you have to do
all of this while driving the car. So before GPS navigation, people would often manually write down the
turns before they actually started driving somewhere they hadn't been before. So our big concern is
usability. That's where we get things like navigation apps. Notice how we have to focus on
understanding the task when we're performing design. If we set out to design a better map, we
probably wouldn't have ended up with a navigation app.
Throughout this unit I've repeatedly asked you to revisit the area of HCI that you chose to keep in mind
throughout our conversations. Now take a second and try to pull all those things together. You've
thought about how your chosen area applies to each of the models in the human's role, how it applies
to the various different design guidelines, and how it interacts with society and culture as a whole. How
does moving it through those different levels change the kinds of designs you have in mind? Are you
building it from low level interactions to high level effects? Are you starting at the top with a desired
outcome and working your way down to the individual operations? There are no right or wrong
answers here. The important thing is reflecting on your own reasoning process.
In looking at human-computer interaction, it's important that we understand the role that we expect
the human to play in this overall system.
Let's talk about three different possible types of roles the human can play: processor, predictor, and participant.
First, we might think of the human as being nothing more than a sensory processor. They take input in
and they spit output out. They're kind of like another computer in the system, just one that we can't program directly.
A second way of viewing the human is to view them as a predictor. Here, we care deeply about the
human's knowledge, experience, expectations, and their thought process. That's why we call them the
predictor. We want them to be able to predict what will happen in the world as a result of some action
they take. So we want them to be able to map input to output. And that means getting inside their
head. Understanding what they're thinking, what they're seeing, what they're feeling when they're
interacting with some task. If we're taking this perspective, then the interface must fit with what
humans know. It must fit with their knowledge. It must help the user learn what they don't already
know and efficiently leverage what they do already know. And toward that end, we evaluate these
kind of interfaces with qualitative studies. These are often ex situ studies. We might perform task
analyses to see where users are spending their time. Or perform cognitive walk-throughs to understand
the user's thought process throughout some task. We can see pretty clearly that this view gives us
some advantages over viewing the user simply as a sensory processor, just as another computer in the
system. However, here we're still focusing on one user and one task. And sometimes that's useful. But
many times we want to look even more broadly than that. That's when we take the third view: the human as a participant.
A third view on the user is to look at the user as a participant in some environment. That means we're
not just interested in what's going on inside their head. We're also interested in what's going on around
them at the same time, like what other tasks or interfaces they're using, or what other people they're
interacting with. We want to understand for example, what's competing for their attention? What are
their available cognitive resources? What's the importance of the task relative to everything else that's
going on? So if we take this view, then our interface must fit with the context. It's not enough that the
user is able to physically use the system and knows how to use the system. They must be able to
actually interact with the system in the context where they need it. And because context is so
important here, we evaluate it with in situ studies. We can't simply look at the user and the interface in
a vacuum. We have to actually view and evaluate them in the real world using the interface in
whatever context is most relevant. If we're evaluating a new GPS application, for example, we need to
actually go out and look at it in the context of real drivers driving on real roads. The information we get
from them using the app in our lab setting isn't as useful as understanding how they're going to actually
use it out in the real world. These are in situ studies, which are studies of the interface and the user
within the real complete context of the task.
[MUSIC] Good design, a GPS system that warns you 20 seconds before you need to make a turn.
>> Bad design, a GPS system that warns you two seconds before you need to make a turn.
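As a sketch of why the 20-second warning works, note that a fixed lead time adapts the warning distance to the driver's speed. The function below is purely illustrative; the numbers are mine, not from the lecture.

```python
# A minimal sketch: with a fixed lead time, the warning *distance*
# automatically scales with speed, so the driver always has the same
# amount of time to react.
def warning_distance_m(speed_kmh: float, lead_time_s: float = 20.0) -> float:
    """Distance before a turn at which to announce it, in meters."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * lead_time_s

# At highway speed, a 20-second warning fires much earlier than in town:
print(round(warning_distance_m(100)))  # ~556 m before the turn
print(round(warning_distance_m(30)))   # ~167 m before the turn
```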
Let's take a moment to reflect on when you've encountered these different views of the user in your
own history of interacting with computers. Try to think of a time when a program, an app or a device
clearly treated you as each of these types of users for better or for worse.
For me, we have a system at Udacity we use to record hours for those of us that work on some contract
projects. It asks us to enter the number of hours of the day we spend on each of a number of different types of work. The problem is that this assumes something closely resembling the processor model. A
computer can easily track how long different processes take. But for me, checking the amount of time
spent on different tasks can be basically impossible. Checking my email involves switching between five different tasks a minute. How am I supposed to track that? The system doesn't take into
consideration a realistic view of my role in the system. Something more similar to the predictor view
would be, well, the classroom you're viewing this in. Surrounding this video are a visual organization of
the lesson's content, a meter measuring your progress through the video, representations of the
video's transcript. These are all meant to equip you with the knowledge to predict what's coming next.
This classroom takes a predictor view of the user. It offloads some of the cognitive load onto the
interface allowing you to focus on the material. For the third view I personally would consider my alarm
clock an example. I use an alarm clock app called Sleep. It monitors my sleep cycles, rings at the
optimal time, and tracks my sleep patterns to make recommendations. It understands its role as part of a
broader system needed to help me sleep. It goes far beyond just interaction between me and an
interface. It integrates into the entire system.
By my definition, user experience design is attempting to create systems that dictate how the user will experience them, preferably so that the user experiences them positively. User experience in general,
though, is a phenomenon that emerges out of the interactions between humans and tasks via
interfaces. We might attempt to design that experience. But whether we design it or not, there is a
user experience. It's kind of like the weather, there's never no weather, there's never no user
experience. It might be a bad experience if we don't design it very well. But there's always some user
experience going on and it emerges as a result of the human's interactions with the task via the
interface. But user experience also goes beyond this simple interaction. It touches on the emotional,
personal, and more experiential elements of the relationship. We can build this by expanding our
understanding of the scope of the user experience. For a particular individual, this is based on things like the individual's age, sex, gender, or race, their personal experiences, their expectations for the interface, and more. It goes beyond just designing an interface to help with a task. It touches on
whether the individual feels like the interface was designed for them. It examines whether they're
frustrated by the interface or joyous about it. Those are all parts of this user experience. We can take
this further and talk about user experience at a group level. We can start to think about how interfaces
lead to different user experiences among social or work groups. For example, I've noticed that school
reunions seem to be much less important to people who've graduated within the past 15 years. And I
hypothesize it's because Facebook and email have played such significant roles in keeping people in
touch. That's fundamentally changed the user experience at the group level. Those effects can then scale all
the way up to the societal level. Sometimes these are unintended. For example, I doubt that the
creators of Twitter foresaw, when they created their tool, how it would play a significant role in big societal changes like the Arab Spring. Or sometimes these effects might be intentional. For example, it was a
significant change when Facebook added new relationship statuses to its profiles to reflect things like
civil unions. That simultaneously reflected something that was already changing at the societal level.
But it also participated in that change and helped normalize those kinds of relationships. And that then
relates back to the individual by making sure the interface is designed such that each individual feels like it was made for them.
So keeping in mind everything we've talked about, let's design something for Morgan. Morgan walks to
work, she likes to listen to audio books, mostly nonfiction. But she doesn't just want to listen, she
wants to be able to take notes and leave bookmarks. And do everything else you do when you're
reading. What would designing for her look like, from the perspectives of viewing her as a processor, a
predictor, and a participant? How might these different designs affect her user experience as an individual, in her local group of friends, and in society as a whole if the design caught on?
As a processor, we might simply look at what information is communicated to Morgan, when, and how. As a predictor, we might look at whether the interface matches Morgan's expectations about reading, note-taking, and bookmarking, so she can predict what her actions will do. As a participant, we might look at the broader interactions between this interface and Morgan's other tasks and social activities. We might look at how increased access to books changes her life in other
ways. But really, this challenge is too big to address this quickly. So instead, let's return to this
challenge throughout our conversations, and use it as a running dialogue to explore HCI principles and
methods.
In this lesson we've covered some of the basic things you need to understand before we start talking
about design principles.
We've covered the idea that interfaces mediate between users and tasks. And the best interfaces are
those that let the user spend as much time thinking about the task as possible.
We covered three views of the user and how those different views affect how we define usability and
evaluation.
[NOISE] Feedback cycles are the way in which people interact with the world, and then get feedback on
the results of those interactions. We'll talk about the ubiquity of those feedback cycles.
Then we'll talk about the gulf of execution, which is the distance between a user's goals and the
execution of the actions required to realize those goals.
We'll discuss seven questions we should ask ourselves when designing feedback cycles for users.
And we'll also look at applications of these in multiple areas of our everyday lives.
Feedback cycles are incredibly ubiquitous, whether or not there's a computational interface involved.
Everything from reading to driving a car to interacting with other people could be an example of a
feedback cycle in action. They're how we learn everything, from how to walk to how to solve a Rubik's
cube to how to take the third order partial derivative of a function. I assume, I've never done that. We
do something, we see the result, and we adjust what we do the next time accordingly. You may have
even seen other examples of this before, too.
If you've taken Ashok's and my knowledge-based AI class, we talk about how agents are constantly
interacting with, learning from, and affecting the world around them. That's a feedback cycle.
In fact, if you look at some of the definitions of intelligence out there, you'll find that many people
actually define feedback cycles as the hallmark of intelligent behavior. Or they might define intelligence
as abilities that must be gained through feedback cycles. Colvin's definition, for example, involves
adjusting to one's environment, which means acting in it and then evaluating the results. Dearborn's
definition of learning or profiting by experience is exactly this as well. You do something and
experience the results, and learn from it. Adaptive behavior in general can be considered an example of
a feedback cycle. Behavior means acting in the world. And adapting means processing the results and
changing your behavior accordingly. And most generally, Schank's definition, getting better over time, is clearly an ability gained through feedback cycles: you improve based on evaluating the results of your actions in the world. We find that nearly all of HCI can be interpreted
in some ways as an application of feedback cycles, whether between a person and a task, a person and
an interface, or systems comprised of multiple people and multiple interfaces.
In our feedback cycle diagram, we have on the left, some user and on the right, some task or system.
The user puts some input into the system through the interface and the system communicates some
output back to the user again through the interface. Inherent in this are two general challenges: the
user's interaction with the task through the interface and the task's return to the user of the output via
the interface.
The first is called the gulf of execution. The gulf of execution can be defined as: how do I know what I can do? The user has some goals. How do they figure out how to make those goals a reality? How do they figure out what actions to take to make the state of the system match their goal state? This is the gulf of execution. How hard is it to do in the interface what is necessary to accomplish the user's goals? Or alternatively, what's the difference between what the user thinks they should have to do and what they actually have to do?
Let's take a simple example of the gulf of execution. I'm making my lunch, I have my bowl of chili in the
microwave. My goal is simple, I want to heat it up. How hard is that? Well, typically when I've been
cooking in the past, cooking is defined in terms of the amount of time it takes. So, in the context of this
system, I specify my intent as microwaving it for one minute. Now what are the actions necessary to
do so? I press Time Cook to enter the time-cooking mode, I enter the time, one minute, and I press
Start. I didn't press Start just now, but I would press Start. I specified my intent, microwave for one
minute. I specified my actions, pressing the right sequence of buttons, and I executed those actions.
Could we make this better? There were a lot of button presses to microwave for just one minute. If we
think that's a common behavior, we might be able to make it simpler. Instead of pressing Time Cook
one, zero, zero and Start, I might just press one and wait. Watch. [NOISE] So I've narrowed the gulf of
execution by shrinking the number of actions required, but I may have enlarged it by making it more
difficult to identify the actions required. When I look at the microwave, Time Cook gives me an idea of
what that button does. So if I'm a novice at this, I can discover how to accomplish my goal. That's good
for the gulf of execution. It's easier to look at the button and figure out what to do than to have to go
look, read a manual, or anything like that and find out on your own. But once you know that all you
have to do is press one, that's much easier to execute. That's something nice about this interface: it caters to both novices and experts. There's a longer but discoverable way, and a shorter but less discoverable way.
But let's rewind all the way back to the goal I set up initially, my goal was to heat up my chili. I specified
my intent in terms of the system as microwaving it for one minute. But was that the right thing to do?
After one minute, my chili might not be hot enough, this microwave actually has an automatic reheat
function that senses the food's temperature and stops when the time seems right. So the best bridge
over the gulf of execution might also involve helping me reframe my intention. Instead of microwaving for one minute, it might encourage me to reframe this as simply heating until ready and
letting the microwave do the rest.
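To make the two paths across this gulf concrete, here is a minimal sketch of a microwave interface that supports both of them. It's a toy model with invented names, not any real microwave's firmware, but it shows how one interface can serve the discoverable novice path and the efficient expert shortcut at the same time.

    # A toy microwave model. All names and behaviors here are assumptions
    # for illustration, not a real appliance's logic.
    class Microwave:
        def __init__(self):
            self.seconds_remaining = 0
            self.running = False

        def time_cook(self, minutes, seconds):
            # Novice path: discoverable (the panel says "Time Cook"),
            # but it takes several presses: Time Cook, 1, 0, 0, Start.
            self.seconds_remaining = minutes * 60 + seconds

        def start(self):
            self.running = True

        def quick_start(self, minutes):
            # Expert shortcut: one digit cooks for that many minutes and
            # starts immediately. Fast, but nothing on the panel reveals it.
            self.seconds_remaining = minutes * 60
            self.running = True

    oven = Microwave()
    oven.time_cook(1, 0)  # the long, discoverable way...
    oven.start()
    oven.quick_start(1)   # ...and the short way, once you know it exists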
Here are five quick tips for bridging gulfs of execution. Number 1, make functions discoverable. Imagine
a user is sitting in front of your interface for the very first time. How would they know what they can
do? Do they have to read the documentation, take a class? Ideally the functions of the interface would
be discoverable, meaning that they can find them clearly labelled within the interface. Number 2, let
the user mess around. You want your user to poke around and discover things so make them feel safe
in doing so. Don't include any actions that can't be undone, avoid any buttons that can irreversibly ruin
their document or setup. That way the user will feel safe discovering things in your interface. Number
3, be consistent with other tools. We all want to try new things and innovate, but we can bridge gulfs of
execution nicely by adopting the same standards that many other tools use. Use CTRL+C for copy and
CTRL+V for paste. Use a diskette icon for save even though no one actually uses floppy disks anymore.
This makes it easy for users to figure out what to do in your interface. Number 4, know your user. The
gulf of execution has a number of components, identifying intentions, identifying the actions to take,
and taking the actions. For novice users, identifying their intentions and actions are most valuable, so
making commands discoverable through things like menus is preferable. For experts though, actually
doing the action is more valuable. That's why many experts prefer the command line. Although it lacks
many usability principles targeted at novices, it's very efficient. Number 5, feedforward. We've talked about feedback, which is a response to something the user did. Feedforward is more like feedback on what the user might want to do. It helps the user predict what the result of an action will be. For
example, when you pull down the Facebook newsfeed on your phone, it starts to show a little refresh
icon. If you don't finish pulling down, it doesn't refresh. That's feedforward, information on what will
happen if you keep doing what you're doing. Many of these tips are derived from some of the
fundamental principles of design pioneered by people like Don Norman and Jakob Nielsen, and we'll
cover them more in another lesson.
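As one illustration of that last tip, here is a rough sketch of pull-to-refresh logic like the Facebook example above. The threshold value and function names are assumptions for illustration, not any real app's code; the point is that the interface previews the result of the gesture before the user commits to it.

    # Feedforward sketch: preview the result of a pull before it's committed.
    REFRESH_THRESHOLD = 120  # pixels; an assumed value

    def show_refresh_icon(opacity):
        print(f"refresh icon at {opacity:.0%} opacity")

    def hide_refresh_icon():
        print("refresh icon hidden")

    def refresh_feed():
        print("feed refreshed")

    def on_drag(pull_distance):
        # While the user is still pulling, show how close the gesture is to
        # triggering a refresh. That's feedforward, not feedback.
        progress = min(pull_distance / REFRESH_THRESHOLD, 1.0)
        show_refresh_icon(opacity=progress)

    def on_release(pull_distance):
        # Only a pull past the threshold refreshes; a shorter pull is safely
        # abandoned, so the user can explore the gesture without consequences.
        if pull_distance >= REFRESH_THRESHOLD:
            refresh_feed()
        else:
            hide_refresh_icon()

    on_drag(60)     # refresh icon at 50% opacity
    on_release(60)  # refresh icon hidden; no refresh happened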
The second challenge is for the task to express to the user through the interface the output of the
actions that the user took. This is called the gulf of evaluation because the user needs to evaluate the
new state of the system in response to the actions they took. Like the gulf of execution, we can think of
this in terms of three parts. There's the actual physical form of the output from the interface. What did
it actually do in response? There might be something visual, there might be a sound, a vibration, some
kind of output. The second is interpretation. Can the user interpret the real meaning of that output?
You might think of this in terms of a smartphone. If a smartphone vibrates in your pocket, can you
interpret what the meaning of that output was, or do you have to pull the phone out and actually see? And
then the third phase is evaluation. Can the user use that interpretation to evaluate whether or not
their goals were accomplished? You can imagine submitting a form online. It might give you output
that you interpret to mean that the form was received, but you might not be able to evaluate whether
or not the form was actually accepted. Once they've received and interpreted that output, the final
step is to evaluate whether or not that interpretation means that their goals were actually realized
within the system. Take our on demand video service example again, imagine that the user has gotten
all the way to finding the program that they want to watch and they've pressed the play button on their
remote. Imagine the interface responds by hiding the menus that they were using to navigate the service. Can they interpret the meaning of that output? And can they evaluate whether or not that
interpretation means that their goals were realized? If they're a novice user, maybe not. An expert
might correctly interpret that the screen blacking out is because the service is trying to load the video.
They then evaluate that interpretation and determine that their goals have been realized. The service is
trying to play the show they want to watch. But, a novice user might interpret that output to mean that
the service has stopped working altogether, like when your computer just shuts down and the screen goes
black. They then incorrectly evaluate that their goals were not actually realized in the system. We
might get over this by showing some kind of buffering icon. That's a different kind of output from the
system that helps the user correctly interpret that the system is still working on the actions that they
put in. They then can evaluate that maybe their goals were correctly realized after all because the
system is still working to bring up their show. So, as you can see, each of these three stages presents
some unique challenges.
Let's take a thermostat, for example. I have a goal to make the room warmer, so I do something to my
thermostat with the intention of making the room warmer. What does the system do as a result? Well,
it turns the heat on, that would be the successful result of my action. But how do I know that the heat
was turned on? Well, maybe I can hear it, I might hear it click on. But that's a one time kind of thing
and it might be quiet. And if I'm mishearing it, I have no way of double checking it. So I'm not sure if I
heard it, and I have to go find a vent and put my hand on it and try to feel the heat coming out. And
there's more going on in a heater: my action might have worked, but the heater doesn't immediately turn on for one reason or another. These are signs of a large gulf of evaluation. Neither the sound nor the vent is an optimal display, because they're either easy to miss or hard to reach. Feeling the heat might be
easy to interpret, but hearing the heater turn on might not. So either way, I have to do a lot to evaluate
whether or not my action was successful. And this is all for a very small piece of feedback. Ideally if I
wasn't successful, we want the system to also tell me why I wasn't successful so I can evaluate what I
did wrong and respond accordingly. There's a very large gulf of evaluation if there's no indicator on the
actual thermostat. So how can we resolve that? Well, simple. We just mark on the thermostat that the
heat is on. That sounds trivial, but nothing in the fundamental design of this system demanded a note
like this. It's only in thinking about the system from the perspective of the user that we find that need.
I can let you know as well, this system still isn't ideal. For various reasons, it'll turn the heater or the air conditioning off even when the room hasn't reached the temperature I put in. And it gives me no indication of why. I can look at the system and evaluate that the temperature is set higher than the current temperature in the room. But at the same time, I can see that the heater isn't on. Under those circumstances, I have no way of knowing if the heater's malfunctioning, if a switch is wrong, or something else entirely. In this case, it might just be that it's set to the wrong mode. The mode is visible, but until I remember to check it, the system appears to be malfunctioning. We can imagine an alternative message on
the screen indicating the direction of the relationship or something similar that would give some sign
that it's currently set incorrectly.
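To see how little code it takes to shrink that gulf, here is a toy sketch of a thermostat that explains its state instead of leaving the user to guess. The modes, wording, and structure are assumptions for illustration, not any real thermostat's logic.

    # A toy thermostat status message that bridges the gulf of evaluation
    # by saying why the heat isn't on, not just that it isn't.
    def status_message(mode, set_temp, current_temp, heater_on):
        if heater_on:
            return "Heat is ON"
        if mode != "heat":
            # The failure case described above: everything looks right,
            # but the system is in the wrong mode. Saying so helps.
            return f"Heat is OFF: mode is set to {mode.upper()}, not HEAT"
        if set_temp <= current_temp:
            return f"Heat is OFF: set point {set_temp} is at or below room temperature {current_temp}"
        return "Heat is OFF: starting up, please wait"

    print(status_message(mode="cool", set_temp=72, current_temp=65, heater_on=False))
    # -> Heat is OFF: mode is set to COOL, not HEAT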
Here are five quick tips for bridging gulfs of evaluation. Number one, give feedback constantly. Don't
automatically wait for whatever the user did to be processed in the system before giving feedback.
Give them feedback that the input was received. Give them feedback on what input was received. Help
the user understand where the system is in executing their action by giving feedback at every step of
the process. Number two. Give feedback immediately. Let the user know they've been heard even
when you're not ready to give them a full response yet. If they tap an icon to open an app, there
should be immediate feedback just on that tap. That way, even if the app takes a while to open, the
user knows that the phone recognized their input. That's why icons briefly grey out when you tap them
on your phone. Number three. Match the feedback to the action. It might seem like this amount of
constant immediate feedback would get annoying, and if executed poorly, it really would. Subtle actions
should have subtle feedback. Significant actions should have significant feedback. Number four, vary
your feedback. It's often tempting to view our designs as existing solely on a screen and so we want to
give the feedback on the screen. But the screen is where the interaction is taking place, so visual
feedback can actually get in the way. Think about how auditory or haptic feedback can be used instead
of relying just on visual feedback. Number five, leverage direct manipulation. We talk about this a lot
more, but whenever possible, let the user feel like they're directly manipulating things in the system.
Things like dragging stuff around or pulling something to make it larger or smaller are very intuitive
actions. Because they feel like you're interacting directly with the content. Use that. Again, we talk far
more about this in another lesson, but it's worth mentioning here as well. By loading these things into
your short-term memory several times, we hope to help solidify them in your long-term memory. And
that relationship is actually something we also talk about elsewhere in this unit.
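As a sketch of the second tip, giving feedback immediately, consider a tap handler that acknowledges the input before the slow work finishes. The names and the two-second delay are assumptions for illustration.

    # Acknowledge input instantly; deliver the full response when it's ready.
    import threading
    import time

    def open_app(icon_name):
        time.sleep(2)                      # stand-in for a slow app launch
        print(f"{icon_name}: app opened")  # the full response arrives later

    def on_tap(icon_name):
        print(f"{icon_name}: icon greyed out")  # instant: the tap was heard
        threading.Thread(target=open_app, args=(icon_name,)).start()

    on_tap("Mail")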
[MUSIC] Good design. A phone that quietly clicks every time a letter is successfully pressed to let you
know that the press has been received. Bad design. A phone that loudly shouts every letter you type.
>> P. I. C.
Remember, small actions get small feedback. The only time you might want your device to yell a confirmation at you is if you'd just ordered a nuclear launch or something.
Let's pause for a second, and reflect on the roles of gulfs of execution and gulfs of evaluation in our own
lives. So try to think of a time when you've encountered a wide gulf of execution, and a wide gulf of
evaluation. This doesn't have to be a computer, it could be any interface. In other words, what was a
time when you were interacting with an interface, but couldn't think of how to accomplish what you
wanted to accomplish? What was a time when you were interacting with an interface and couldn't tell
if you'd accomplished what you wanted to accomplish?
It's not a coincidence that I'm filming this in my basement. This actually happened to me a few weeks
ago. The circuit to our basement was tripped, which is where we keep our modem, so our internet was
out. Now this is a brand new house and it was the first time we tripped a breaker, so I pulled out my
flashlight and I opened the panel. And none of the labels over here clearly corresponded to the breaker
I was looking for over here. I ended up trying every single one of them and still it didn't work. I shut off
everything in the house. Why didn't it work? In reality, there was a reset button on the outlet itself
that had to be pressed. The only reason we noticed it was because my wife noticed something out of the ordinary.
15 years ago we might not have talked about cars in the context of discussing HCI, but nowadays this is
basically a computer on wheels. So let's talk a little bit about how feedback cycles apply here. Let's start
with the ignition. The button that starts my car is right here. Why is it located there?
So we know where the ignition button is. Let's press it. [NOISE] Do you think the car turned on?
Well, what do we know? We know the car was off. We know this is clearly the power button based on
how it's labeled and where it's located. And most importantly, when I pressed it we heard kind of a sound in response.
So now that we've seen the way this feedback cycle currently works, let's talk about improving it. How
might we make this feedback cycle even better? How might we narrow the gulf of execution and the
gulf of evaluation?
So here are a few ideas that I had. We know that the screen can show an alert when you try to turn the
car on without pressing the brake pedal down. Why not show that alert as soon as the driver gets in the car?
In his book, The Design of Everyday Things, Don Norman outlines seven questions that we should ask when
determining how usable a device is. Number one, how easily can one determine the function of the
device? Number two, how easily can one tell what actions are possible? Number three, how easily can
one determine the mapping from intention to physical movement? Number four, how easily can one
actually perform physical movement? Number five, how easily can one tell what state the system is in?
Number six, how easily can one tell if the system is in the desired state? And number seven, how easily
can one determine the mapping from system state to interpretation? You'll notice these match our
stages from earlier. Interpret your goal in the context of the device's function. Discern what actions are
possible on the device. Identify how to perform an action or what action you want to perform. Perform
the action. Observe the system's output. Compare the output to the desired state. And interpret the
difference between the output and the desired state.
I asked you earlier to pick an area of HCI in which you're interested, and reflect on it as you go through
this course. Depending on the area you selected, feedback cycles can play a huge number of different
roles. In healthcare, for example, feedback cycles are critical to helping patients manage their
symptoms, and that relies on the results of certain tests being easy to interpret and evaluate. Feedback
cycles are also present in some of the bigger challenges for gesture based interactions. It can be
difficult to get feedback on how a system interpreted a certain gesture and why it interpreted it that
way. Compare that to touch, where it's generally very easy to understand where you touched the
screen. So think for a moment about how feedback cycles affect the area you chose to keep in mind.
Lately I've encountered another interesting example of feedback cycles in action. You may have
actually seen this before as well. They're the new credit card readers. My wife sells arts and crafts at
local events, and so she has these Square readers that can scan credit cards on her phone. One version
lets you swipe, and the new version lets you insert the card. So let's check this out real quick. With the
swipe version you just insert the card and pull it through, just like a traditional card reader. The
problem is there's typically no feedback on whether you're swiping correctly. And what's more is you
can be wrong in both directions. You can be both too fast or too slow. So you may have had a time
when you were trying to swipe a credit card on some kind of reader, and you kept doing it more and
more slowly and deliberately, thinking that the problem was that you had done it too fast originally.
And then you discover that you've actually been going too slowly all along and your slowing down was
actually counterproductive. There's no feedback here, and the space of acceptable input is bounded
on both sides. You have to go above one speed and below another speed. But now credit card readers
are moving to this model where you just insert the card. You try, at least. In terms of feedback cycles,
in what ways is this actually better?
So, using the insertion method is significantly easier. However, the insertion method introduces a new
problem. With the sliding method, I never had to actually physically let go of my card, so there was
little chance of me walking away without it. With the insertion method, I insert the card and I wait. I'm
not used to having to remember to retrieve my card from the card reader. Now this isn't quite as big a
deal with these new portable readers, but for the mounted ones you see in stores it can be far more
problematic. So how can we build some feedback into the system to make sure people remember their
cards when they walk away?
Now notice one last thing about this example. We've been discussing how to make the process of
swiping or inserting a credit card easier. What's wrong with that question?
The problem is that we're not focused on the right task. Our task shouldn't be to swipe a credit card, or
insert a credit card, or anything like that. Our task should be how to most easily pay for purchases. And
possibly the easiest way to do that would be to design a system that lets you just tap your phone
against the reader, this reader actually does that. That way, we can use the thing that people have on
them at all times. Now maybe that isn't the best option for various other reasons, but the important
thing is we need to focus on what we're really trying to accomplish. Not just how we've done it in the
past. We can make incremental improvements to swiping or inserting a credit card all we
want. But we should always keep our eyes on the underlying task that the user needs to accomplish.
Today we've talked about arguably the most fundamental concept of human computer interaction,
feedback cycles. We describe feedback cycles for our purposes as the exchange of input and output
between a user and a system to accomplish some goal.
We talked about gulfs of execution, the distance between a user's goals and the actions required to realize those goals. And we talked about gulfs of evaluation, the distance between making some change in the system and evaluating whether or not the goal was accomplished.
[MUSIC] Today we'll talk about two applications of good feedback cycles. Direct manipulation and
invisible interfaces. Direct manipulation is the principle that the user should feel as much as possible
like they're directly controlling the object of their task. So for example, if you're trying to enlarge an
image on your phone, it might be better to be able to drag it with your fingers rather than tapping a
button that says, zoom in. That way you're really interacting directly with the photo. New technologies
like touch screens are making it more and more possible to feel like we're directly manipulating
something, even when there's an interface in the way.
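Part of why pinching feels direct is that the arithmetic maps straight onto the fingers' actual movement. Here is a small sketch of the math, with coordinates invented for illustration:

    # The scale factor of a pinch gesture is just the ratio of the fingers'
    # current separation to their separation when the gesture began.
    import math

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def pinch_scale(start_f1, start_f2, now_f1, now_f2):
        return distance(now_f1, now_f2) / distance(start_f1, start_f2)

    # Fingers start 100 px apart and spread to 150 px: the image grows 1.5x.
    print(pinch_scale((100, 300), (200, 300), (75, 300), (225, 300)))  # 1.5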
Our goal is to narrow the gulf of execution and the gulf of evaluation as much as possible. And arguably
the ultimate form of this is something called direct manipulation. Now today direct manipulation is a
very common interaction style. But in the history of HCI it was a revolutionary new approach. Now to
understand direct manipulation, let's talk about the desktop metaphor. The files and folders on your
computer are meant to mimic physical files and folders on a desktop. So here, on my physical
desktop, I have some files. What do I do if I want to move them? Well, I pick them up and I move them.
What do I do if I want to put them in a folder? I pick them up and put them in the folder. I'm physically
moving the files from where they are to where I want them to be. If files and folders on a computer are
meant to mimic files and folders on a physical desk, then shouldn't the act of moving them also mimic
the real world action of moving them? Wouldn't it narrow the gulf of execution to leverage that real world
experience and that real world expectation?
Files and folders on my computer are meant to mimic files and folders on my physical desk. So we
ideally want the action of moving them around on my computer to mimic the action of moving them
around on my physical desk. But it wasn't always like this. Before graphical user interfaces were
common, we moved files around using command line interfaces like this. The folder structure is still the
same on the operating system. But instead of visualizing it as folders and icons, I'm interacting with a
text-based command line. To view the files, I might need to type a command like ls, which I just have to
remember. If I don't remember that command, I don't have much recourse to go find out what I'm
supposed to be doing. To move a file, I need to type something like mv notes.txt Documents. I have to type the command,
the file I want to move, and the folder I want to move it to. Again, if I forget the name of that
command, or the order of the parameters to provide, there's not a whole lot I can do to recover from
that. I need to run off to Google and find out what the correct order of the commands was. Which is
actually what I did before filming this video because I don't actually use the terminal very often. Then
when I execute that command, there's not a lot of feedback to let me know if it actually executed
correctly. I might need to change folders to find out. There I see the files present in that folder but I
had to go and look manually. There's nothing really natural about this. Now don't get me wrong, once
you know how to interact with this interface, it's very efficient to use. But when you're a novice at it,
when you've never used it before, this is completely unlike the task of managing physical files on your
real desk.
The seminal paper on direct manipulation interfaces came out in 1985, coauthored by Edwin Hutchins, James Hollan, and Don Norman. We'll talk a lot more about Hutchins and Norman later in our conversations. Hutchins coauthored the foundational paper on distributed cognition, and Norman created one of the most accepted sets of design principles in his seminal book, The Design of Everyday
Things. But in 1985, direct manipulation was starting to become a more common design strategy.
Hutchins, Hollan, and Norman identified two aspects of directness.
The first was distance. Distance is the distance between the user's goals and the system itself. This is
the idea of gulfs of execution and evaluation that we talked about in the context of feedback cycles.
They write that "the feeling of directness is inversely proportional to the amount of cognitive effort it
takes to manipulate and evaluate the system." In other words, the greater the cognitive load required
to use the system, the less direct the interaction with the system actually feels.
This is brought together here in figure 6 from this paper. The user starts with some goals and translates
those goals into the intentions in the context of the interface. They then translate those intentions into
the form of the input, the actions, and execute those actions. The system then does something and
then gives back some form of output. The user then interprets the form of that output to discern the
meaning of that output, and then evaluates whether or not that meaning matches their goals. So to
take an example, when I brought up this figure, I needed to rotate the paper to display it correctly. That
was my goal. I translated that goal into the context of the application, a rotate option that was probably
hidden somewhere. I then identified the action, which was pressing the rotate button, and I executed that action. Then I interpreted and evaluated the output to confirm the page had rotated.
The second component of this is direct engagement. What sets direct manipulation apart is
this second component. The authors of the paper write that "the systems that best exemplify direct
manipulation all give the qualitative feeling that one is directly engaged with the control of the objects—
not with the programs, not with the computer, but with the semantic objects of our goals and
intentions." If we're moving files we should be physically moving the representations of the files. If
we're playing a game, we should be directly controlling our characters. If we're navigating channels, we
should be specifically selecting clear representations of the channels that we want. That's what takes a
general, good feedback cycle and makes it an instance of direct manipulation.
Virtual reality right now is making some incredible strides in facilitating direct manipulation in places
where it just hasn't been possible before. Traditionally, when designers are designing stuff in 3D,
they're forced to use a 2D interface. That translation from 2D to 3D really gets in the way of directly
manipulating whatever is being designed. Through virtual reality though, designers are able to view
what they're designing in 3D as if it's in the room there with them. They can rotate it around with the
same hand motions you'd use to rotate a physical object. They can physically move around the object
to get different angles on it. So virtual reality is allowing us to bring the principle of direct manipulation
to tasks it hasn't been able to touch before. But there's still a lot of work to do. Gesture interfaces like
those used in virtual reality struggle with some feedback issues. We aim to make the user feel like
they're physically manipulating the artifact. But when you're working with something with your
hands, it pushes back against you. How do we recreate that in virtual reality?
Take a moment real quick and reflect on some of the tasks you perform with computers day-to-day.
What are some of the places where you don't interact through direct manipulation? If you're having
trouble thinking of one, think especially about places where technology is replacing things you used to
do manually. Chances are, the physical interface was a bit closer to the task than the new technical
one. How can the technical interface better leverage direct manipulation?
When I was writing the script for this exact video, I was interrupted by a text message from a friend of
mine. And in the reply I was writing, I wanted to include a smiley face. We know that using emojis and
emoticons tends to humanize textual communication. On my phone, the interface for inserting an emoji
is to tap an icon to bring up a list of all the emojis and then select the one that you want. When I'm
reacting to someone in conversation, I'm not mentally scrolling through a list of all my possible
emotions and then choosing the one that corresponds. I'm just reacting naturally. Why can't my phone
capture that? Instead of having to select smiling from a list of emotions, maybe my phone could just
have a button to insert the emotion corresponding to my current facial expression. So to wink, I would
just wink. To frown, I would just frown. It wouldn't be possible to capture every possible face, but for
some of the most commonly used ones, it might be more efficient.
There may be no better example of the power of direct manipulation than watching a baby use an
interface. Let's watch my daughter, Lucy, try and use her Kindle Fire tablet. My daughter Lucy is 18
months old, yet when I give her an interface that uses direct manipulation, she's able to use it. She
wouldn't be able to use a keyboard or a mouse yet, but because she's directly interacting with the
things on the screen, she can use it.
Actually, there might be an even better example of direct manipulation in action. There are games
made for tablet computers for cats. Yes, cats can use tablet computers when they use direct
manipulation.
Let's try a quick exercise on direct manipulation. The Mac touchpad is famous for facilitating a lot of
different kinds of interactions. For example, I can press on it to click, or press with two fingers to right-click. I can pull up and down with two fingers to scroll up and down. I can double tap with two fingers to zoom in and out a little bit. And I can pinch to zoom in and out a lot more. Which of these are good
examples of direct manipulation in action?
Now there's some room for disagreement here, but I think these five seem to be pretty cut and dry.
We can think about whether or not these are direct manipulation by considering whether or not what
we're doing to the touchpad is what we'd like to do to the screen itself. For clicking, I would consider
that direct manipulation because just as we press directly on the screen we're pressing directly on a
touchpad. Right-clicking though, the two finger tap, doesn't really exemplify direct manipulation,
because there's nothing natural about using two fingers to bring up a context menu as opposed to using
one to click. We have to kind of learn that behavior. Scrolling makes sense because with scrolling it's
like I'm physically pulling the page up and down to see different portions of it. The two finger tap for zooming, like right-clicking, is something we just have to learn, while pinching to zoom feels like physically stretching or shrinking the content itself.
The Mac Touchpad has some interesting examples of how you can make indirect manipulation feel
more direct. For example, if I swipe from the right to the left on the touchpad with two fingers, it pulls
up this notification center over on the right. This feels direct because the notification center popped up
in the same place on the screen that I swiped on the touchpad. The touchpad is almost like a miniature
version of the screen. But they could have placed a notification center anywhere and used any kind of
interaction to pull it up. This isn't like scrolling where there is something fundamental about the content
that demands a certain kind of interaction. They could have designed this however they wanted. But by
placing the notification center there and using that interaction to pull it up, it feels more direct. Now,
animation can also help us accomplish this. On the Touchpad, I can clear the windows off my desktop by
kind of spreading out my fingers on the touchpad, and the animation shows them going off to the side.
And while that's kind of like clearing off your desk, I'd argue it's not close enough to feel direct except
that the animation on screen mimics that action as well. The windows could have faded away or they
could have just slid to the bottom and still accomplished the same function of hiding what's on my
desktop. But the animation they chose reinforces that interaction. It makes it feel more direct. The
same thing actually applies with Launchpad, which we bring up with the opposite gesture, by pinching
our fingers together. The animation looks kind of like we're pulling back a little bit or zooming out and
we see the launch pad come into view, just as the gesture is similar to zooming out on the screen. So
direct manipulation isn't just about designing interactions that feel like you're directly manipulating the
interface. It's also about designing interfaces that lend themselves to interactions that feel more direct.
Depending on the area of HCI that you chose to explore, direct manipulation might be a big open question. For gesture-based interaction, for example, you're generally not actually touching anything.
Direct manipulation is contingent on immediate feedback that maps directly to the interaction. So how
do you create that in a gesture-based interface? This is a big challenge for virtual reality as well. Virtual
reality thrives on making you feel like you're somewhere else, both visually and auditorily, but it has a
long way to go kinesthetically. How do you create the feeling of direct manipulation based on physical
action when you can only give feedback visually or auditorily? So take a second and reflect on how
these principles of direct manipulation apply to your chosen area of HCI.
Whether through using direct manipulation, through innovative approaches to shrinking these gulfs or
through the user's patience and learning, our ultimate goal is for the interface between the user and
the task to become invisible.
What this means is that even though there is an interface in the middle, the user spends no time
thinking about it. Instead, they feel like they're interacting directly with the task rather than with some
interface.
So for example, I have a stylus and I'm going to write on this tablet computer. I'm interacting with an
interface just translating my drawing into data in the system. But for me, this feels just like I'm writing
on a normal page. That feels just like writing on paper. This interface between me and the data
representation of my drawing underneath is pretty much invisible. I feel like I'm writing on paper.
Bad design: interfaces that are literally invisible. Well, kind of. Gesture-based interfaces are in one sense literally invisible, and that's actually why it's so important to give great feedback. Because otherwise, it's tough to gauge the success of a gesture interaction.
We shouldn't fall into the trap of assuming that just because an interface has become invisible, the
design is great. Interfaces become invisible not just through great design, but also because users learn
to use them. With enough practice and experience, many users will become sufficiently comfortable
with many interfaces for them to feel invisible within the task. So take driving, for example. Let's say I'm
driving a car and I discover I'm headed right for someone. What's my reaction? Well I turn the wheel to
the side and I press my brake. It's instinctive. I do it immediately, but think about that action. If I was
just running down the street and suddenly I saw someone in front of me would it be natural for me to
go like that? Of course not. The steering wheel was an interface I used to turn to the left. But it's
become invisible during the task of driving because of all my practice with it. But just because the
interface has become invisible doesn't mean it's a great interface. People spend months learning to
drive, they pay hundreds of dollars for classes. And they have to pass a complicated test. Driving is
important enough that it can have that kind of learning curve. But for the interfaces that we design, we
generally can't expect users to give us an entire year just to learn to use them. We'll be lucky if they
give us an entire minute to learn to use them. So our goal is to make our interfaces invisible by design.
Our goal is to create interfaces that are invisible from the moment the user starts using them. They
should feel immediately as if they're interacting with the task underlying the interface. Now this is an
extremely tall order and one we honestly probably won't meet very often, but it's the goal. In fact, in
my opinion, this is why people tend to underestimate the complexity of HCI. When you do things right,
people won't be aware that you've done anything at all. So how do we create interfaces that are
invisible from the very first moment the user starts using them? That's precisely what we'll discuss in a
lot of our conversations about HCI. We'll talk about principles for creating interfaces that disappear like
leveraging prior expectations and providing quick feedback. We'll also talk a lot about how to get inside
the user's head and understand what they're seeing when they look at an interface, so that we can make sure their internal mental model matches the system. In fact, if we consider invisibility to be a
hallmark of usable design, then this entire course could be retitled Creating Invisible Interfaces.
Here are five tips for designing invisible interfaces. Number 1, use affordances. We talk more about
affordances when we discuss design principles and heuristics. But affordances are places where the
visual design of the interface suggests how it's supposed to be used. Buttons are for pressing. Dials are
for turning. Switches are for flicking. Use these expectations to make your interface more usable.
Number 2, know your user. Invisibility means different things to different people. Invisibility to a novice
means the interactions are all natural. But invisibility to an expert means maximizing efficiency. Know
for whom you're trying to design. Number 3. Differentiate your user. Maybe you're designing
something for both novices and experts. If that's the case, provide multiple ways of accomplishing tasks.
For example, having copy and paste under the Edit menu keeps those options discoverable. But
providing Ctrl+C and Ctrl+V as shortcuts keeps those actions efficient. Number 4. Let your interface teach.
When we think of teaching users how to use our software we usually think of tutorials or manuals. But
ideally the interface itself will do the teaching. For example, when users select Copy and Paste from the
Edit menu, they see the hotkey that corresponds to that function. The goal is to teach them a more
efficient way of performing the actions without requiring them to already know that in order to do their
work. Number 5. Talk to your user. We'll say this over and over again, but the best thing you can do is
talk to the user. Ask them what they're thinking while they use an interface. Note especially whether
they're talking about the task or the interface. If they're talking about the interface, then it's pretty
visible.
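Tips 3 and 4 are easy to see in code. Here is a minimal Tkinter sketch of an Edit menu that keeps Copy and Paste discoverable while teaching the efficient shortcut alongside them; the exact labels and bindings are illustrative, not a prescription.

    # An Edit menu that differentiates users (menu for novices, hotkeys for
    # experts) and teaches: the accelerator text shows the shortcut every
    # time the menu is opened.
    import tkinter as tk

    root = tk.Tk()
    menubar = tk.Menu(root)
    edit = tk.Menu(menubar, tearoff=0)
    edit.add_command(label="Copy", accelerator="Ctrl+C",
                     command=lambda: root.event_generate("<<Copy>>"))
    edit.add_command(label="Paste", accelerator="Ctrl+V",
                     command=lambda: root.event_generate("<<Paste>>"))
    menubar.add_cascade(label="Edit", menu=edit)
    root.config(menu=menubar)
    root.mainloop()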
Reflecting on where we've encountered invisible interfaces is difficult, because they were invisible.
What makes them so good is the fact that we didn't have to notice them. But give it a try anyway. Try
to think of a time where you picked up a new interface for the very first time and immediately knew
exactly how to use it to accomplish the task you had in mind.
One of my favorite examples of an interface that is invisible by design comes from a video game called
Portal 2. In lots of video games, you use a control stick to control the camera in game, but different
people have different preferences for how the camera should behave. Some feel if you press up, you
should look up. Others like myself, feel if you press down you should look up, more like you're
controlling an airplane with a joystick. In most games you have to set this manually by going to options,
selecting camera controls, and enabling or disabling y-axis inversion, and it's just a chore. But in Portal 2, watch what happens. >> You will hear a buzzer. When you hear the buzzer, look up at the ceiling.
[SOUND] Good. You will hear a buzzer. When you hear the buzzer, look down at the floor. [SOUND]
Good. This completes the gymnastic portion of your mandatory physical and mental wellness exercise.
>> Did you see that? It was so subtle you might not have even noticed it. A character in the game asked me to look up, and based on which way I pushed the stick, the game silently configured the camera controls to match my preference. No options menu required.
For this design challenge, let's tackle one of the most common problems addressed in undergraduate
HCI classes, designing a better remote control. Now, these probably aren't very good interfaces. And
that's not to say that they're poorly designed, but the constraints on how many different things they
have to do and how limiting the physical structure can be, make these difficult to use. You might have
seen humorous images online of people putting tape over certain buttons on the controls to make them
easier to use for their parents or their kids. How would we design an invisible interface for a universal
remote control, one that doesn't have the learning curves that these have?
Personally I think this is a great candidate for a voice interface. And in fact, Comcast, Amazon and
others have already started experimenting with voice interfaces for remote controls. One of the
challenges with voice interfaces is that generally the commands aren't very discoverable. Generally, if
you don't know what you can say, you have no way of finding out. But watching TV and movies is such a
normal part of our conversations that we already have a vocabulary of how to say what we want to do.
The challenge is for us designers to make a system that can understand that vocabulary. That way
when I say, watch Community, it understands that Community is a TV show and it tries to figure out, do
I grab it from the DVR, do I grab it from On Demand, do I see if it's on live? The vocabulary for the user
was very natural. So for example, watch Conan.
>> Well tonight, a fan named David Joiner thinks he caught a mistake. He says it happened when I was
telling a joke recently about Rand Paul.
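A toy sketch of what understanding a command like "watch Community" might involve is below. The catalog, the priority order, and all the names are invented for illustration; real voice remotes are far more sophisticated than this.

    # Resolve a natural "watch X" utterance to one of several sources.
    CATALOG = {
        "community": {"dvr": True, "on demand": True, "live": False},
        "conan":     {"dvr": False, "on demand": False, "live": True},
    }

    def handle_utterance(utterance):
        if not utterance.lower().startswith("watch "):
            return "Sorry, I didn't understand that."
        title = utterance[6:].strip().lower()
        sources = CATALOG.get(title)
        if sources is None:
            return f"I couldn't find '{title}'."
        # Try the likeliest sources in order: recordings, on demand, live TV.
        for source in ("dvr", "on demand", "live"):
            if sources[source]:
                return f"Playing '{title}' from {source}."
        return f"'{title}' isn't available right now."

    print(handle_utterance("watch Community"))  # Playing 'community' from dvr.
    print(handle_utterance("watch Conan"))      # Playing 'conan' from live.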
Today we have talked about two applications of effective feedback cycles. Direct manipulation, and
invisible interfaces. We talked about how interfaces are most effective when the user has a sense that
they're directly manipulating the object in their task. We talked about how modern technology like
touch screens and virtual reality are making it possible for manipulation to feel more and more direct.
We talked about how the most effective interfaces become invisible between the user and their task.
That is, the user spends no time at all thinking about the interface. And we talked about how
interfaces can become invisible via either learning or design, and that we're most interested in designing them
to become invisible. To a large extent, that's the definition of usable design. Designing interfaces that
disappear between the user and their task.
[MUSIC] Human computer interaction starts with human. So it's important that we understand who the
human is, and what they're capable of doing. In this lesson, we're going to bring up some psychology of
what humans can do. We'll look at three systems. Input, processing, and output.
Input is how stimuli are sent from the world, and perceived inside the mind.
Processing is how the mind makes sense of that input, through things like memory and cognition.
Output is how the brain then controls the individual's actions out in the world. Now, we're going to
cover a lot of material at a very high level. If you're interested in hearing more, I recommend taking a
psychology class, especially one focusing on sensation and perception. We'll put some recommended
courses in the notes.
In discussing human abilities, we're going to adopt something similar to the processor view of the
human. For now we're interested in what they can do, physically, cognitively, and so on.
So we're going to focus exclusively on what's going on over here. We're going to look at how the person
makes sense of input, and how they then act in the world. And right now, we're not going to worry too
much about where that input came from, or what their actions in the world actually do. Notice that in
this lesson we're discussing the human, almost the same way we discuss the computer, or the interface,
in most lessons. The human is something that produces output and consumes input, just like a
computer does. But for now, we're only interested in how the human does this.
Let's start by talking a bit about what the average person can sense and perceive. So, here we have
Morgan again. Morgan has eyes. Morgan's eyes are useful for a lot of things.
The center of Morgan's eye is most useful for focusing closely on color or tracking movement. So, we
can assume that the most important details should be placed in the center of her view. Morgan's
peripheral vision is good for detecting motion, but it isn't as good for detecting color or detail. So while
we might use her periphery for some alerts, we shouldn't require her to focus closely on anything out
there.
As Morgan gets older, her visual acuity will decrease. So if we're designing something with older
audiences in mind, we want to be careful of things like font size. Ideally, these would be adjustable to
meet the needs of multiple audiences. All together though, Morgan's visual system is hugely important
to her cognition. The majority of concepts we cover in HCI are likely connected to visual perception.
[SOUND]
Morgan can discern noises based on both their pitch and their loudness.
Unlike vision, hearing isn't directional: Morgan can't close her ears or point them away from a sound, so she can't as easily filter out auditory information. That might be useful for designing alerts, but it's
problematic for overwhelming her or sharing too much information with the people around her.
Morgan also has a sense of touch. Touch can't feel at a distance, but it can feel when things are right up against her skin.
But with touch screens, motion controls and virtual reality, touch needs to be more and more designed
explicitly into the system if we're to use it.
Let's design something for Morgan real quick. Let's tackle the common problem of being alerted when
you've received a text message. Here are the constraints on our design for Morgan. It must alert her
whether the phone is in her pocket or on the table. It cannot disturb the people around her. [SOUND]
And yes, vibrating loudly against the table counts as disturbing the people around her. You're not
restricted to just one modality, though, but you are restricted to the sensors that the phone has
available.
Here's one possible design. We know that smart phones have cameras and light sensors on them. We
can use that to determine where the phone is, and what kind of alert it should trigger. If the sensor
detects light, that means the screen is visible, so it might alert her simply by flashing its flashlight or
illuminating the screen. If the sensor does not detect light, it would infer that the phone is in her
pocket and thus, would vibrate instead. Now, of course, this isn't perfect. It could be in her purse, or
she could have put it face down. That's why we iterate on a design like this, to improve it based on the
user's experiences.
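In code terms, the design is a simple branch on the light sensor. The threshold and names below are assumptions for illustration; real phone sensor APIs differ by platform.

    # Choose an alert modality based on ambient light.
    LIGHT_THRESHOLD_LUX = 10  # below this, assume the phone is in a pocket or bag

    def alert_for_text(ambient_lux):
        if ambient_lux >= LIGHT_THRESHOLD_LUX:
            # The screen is probably visible: a silent visual alert suffices
            # and disturbs no one.
            return "flash the screen or flashlight"
        # Probably in a pocket: vibration reaches Morgan without making noise.
        return "vibrate"

    print(alert_for_text(ambient_lux=200))  # on the table -> visual alert
    print(alert_for_text(ambient_lux=0))    # in a pocket -> vibrate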
After the perception portion of this model comes the cognition portion, starting with memory. There
are lots of different models of memory out there. For our purposes we're going to talk about three
different kinds of memory: the perceptual store or working memory, the short-term memory, and the long-term memory. Some scientists argue that there are other types of memory as well, like an
intermediate sort of back of the mind memory. But the greatest consensus is around the existence of at
least these three kinds.
So let's start with the first, the perceptual store or the working memory. The perceptual store is a very
short term memory lasting less than a second. One of the most common models of working memory
came from Baddeley and Hitch in 1974. They described it as having three parts. First, there's the
visuospatial sketchpad which holds visual information for active manipulation. So for example, picture
a pencil. The visuospatial sketchpad is where you're currently seeing that pencil. A second part is the
phonological loop. The phonological loop is similar, but for verbal or auditory information. It stores the
sounds or speech you've heard recently, such as the sound of me talking to you right now. A third part
is the episodic buffer, a later addition to the model. The episodic buffer takes care of integrating information from the other systems
as well as chronological ordering to put things in place. Finally all three of these are coordinated by a
central executive.
When we're designing interfaces, short-term memory is one of our biggest concerns. It's important we
avoid requiring the user to keep too much stored in short-term memory at a time. Current research
shows that users can really only store four to five chunks of information at a time. For a long time,
there was a popular idea that people could store seven, plus or minus two, items in memory at a time.
But more recent research suggests that the number is really four to five chunks. There are two
principles we need to keep in mind here, though. The first is the idea of chunking. Chunking is grouping
together several bits of information into one chunk to remember.
So to illustrate this, let's try it out. I'm about to show you six combinations of letters. Try to memorize
them and then enter them into the exercise that pops up. Are you ready? Now to keep you from just
rehearsing it in your perceptual store until you can reenter it, I'm going to stall and show you some
pictures of my cats.
So what happened? Well, what likely happened is that you maybe had only a little bit of trouble
remembering the two real words that were listed on the right. You might have had some more trouble
remembering the two words that were listed in the middle that were fake words, but did nonetheless
look like real words. And you probably had a lot of trouble remembering the two series of letters over
on the left. Why is all of that? Well, when it came to memorizing these two words, you were just
calling up a chunk that you've already had. You didn't see these as arbitrary collections of letters, you
just saw them as words. For the ones in the middle, you've never seen those combinations of letters
before, but you could pronounce them as if they were words. So you likely saw them as fake words
rather than just random collections of letters. For the ones over on the left, though, you had to memorize five individual characters. That means that while these four could be chunked into words or pseudo-words, these likely had to be remembered as five chunks each. That makes them much more difficult to remember than the others.
However, there is a way we can make this easier. So let's ask a different question. Which of these six
words did you see in the exercise previously?
Even if you had trouble recalling these series of letters in the previous exercise, you probably were much more successful at this one. Why is that? That's because it's far easier to recognize
something you know than to recall it independently. And that's a useful take away for us as we design
interfaces. We can minimize the memory load on the user by relying more on their ability to recognize
things than to recall them.
So what are the implications of short term memory for HCI? We don't want to ask the user to hold too
much in memory at a time, four to five chunks is all. Asking the user to hold ten numbers in short term
memory, for example, would probably be too much. But we can increase the user's effective short term
memory capacity by helping them chunk things.
For example, this is probably far easier to remember, even though it's the same content. We've
shrunk ten items into three. And we've used a format for phone numbers with which you're probably
familiar if you're in the United States. If you're from outside the U.S., you might be familiar with a
different grouping. But the same principle applies. And finally, when possible, we should leverage
recognition over recall. For example, if I ask you to recite the number, maybe you could. In fact, go
ahead. Try it.
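To make chunking concrete, here's a small Python sketch that turns ten independent digits into three familiar chunks using the U.S. phone number convention mentioned above; the specific format is just one convention among many.

    def chunk_us_phone(digits: str) -> str:
        # Ten separate items become three chunks: "4045550123" -> "(404) 555-0123".
        assert len(digits) == 10 and digits.isdigit()
        return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

    print(chunk_us_phone("4045550123"))  # (404) 555-0123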
Finally, we have long-term memory. Long-term memory is a seemingly unlimited store of memories, but
it's harder to put something into long-term memory, than to put it into short-term memory. In fact, to
load something into long-term memory, you generally need to put it into short-term memory several
times.
To demonstrate this, I'm going to describe something called the Leitner system. The Leitner system is a
way of memorizing key value pairs, or in other words memorizing flashcards. Those can be words and
their definitions, countries and their capitals, laws and their formulas, anything where you're given a
key and asked to return a value. So I have some flashcards here that have the capitals of the world.
What I do is go through each one, read the country, check to see if I remember the capital. And if I do,
I'll put it in the pile on the right. If I read the country and don't know the capital, I'll put it in the pile on
the left. So let me do that real quick. [MUSIC] Now tomorrow, I will just go through the pile on the left.
Any that I remember from the pile on the left tomorrow, I'll move to the pile on the right. Any that I
still don't remember, will stay in the pile on the left. So I'm focusing my attention on those that I don't
yet know. In four days, I'll go through the pile on the right. And any that I don't remember then, I'll
move back to my pile on the left, to remind me to go through them each day. So the things that I
remember least, are most often loaded in the short-term memory, solidifying them in my long-term
memory. Now in practice, you wouldn't just do this with two piles, you'd use three, or four, or five. And the longest-duration pile you might only go through yearly, just to see if anything has decayed.
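If you wanted to implement the Leitner system yourself, the bookkeeping is small. Here's a hedged Python sketch with three piles; the review intervals are assumed values, and the knows callback stands in for actually quizzing yourself with the card.

    class LeitnerDeck:
        # Pile 0 is reviewed every session, pile 1 every 4th, pile 2 every
        # 10th. These intervals are illustrative, not canonical.
        REVIEW_EVERY = [1, 4, 10]

        def __init__(self, cards):
            # cards: a dict of key -> value pairs, e.g. country -> capital.
            self.piles = {0: list(cards.items()), 1: [], 2: []}
            self.session = 0

        def study(self, knows):
            # knows(key, value) -> bool stands in for quizzing yourself.
            self.session += 1
            moves = []
            for pile, cards in self.piles.items():
                if self.session % self.REVIEW_EVERY[pile]:
                    continue  # this pile isn't due this session
                for card in cards:
                    # A hit promotes the card one pile; a miss demotes it to
                    # pile 0, so the cards you know least are loaded into
                    # short-term memory most often.
                    dest = min(pile + 1, 2) if knows(*card) else 0
                    if dest != pile:
                        moves.append((pile, dest, card))
            for src, dest, card in moves:
                self.piles[src].remove(card)
                self.piles[dest].append(card)

    deck = LeitnerDeck({"France": "Paris", "Japan": "Tokyo", "Kenya": "Nairobi"})
    deck.study(lambda country, capital: country == "France")  # pretend we knew one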
Now let's talk a little bit about cognition. One of the most important cognitive processes to consider is
learning. When we design interfaces, we are in some ways hoping the user has to learn as little
as possible to find the interface useful. However, our interfaces should also teach the user over time
how to use them most efficiently. We can take PowerPoint as an example of this. Let's pretend this is
the first time I've ever used PowerPoint and I want to copy something. This application doesn't assume
I know anything yet. If I poke around, I'll find the Copy button under the Edit menu, which is off-screen
above the slide.
When I bring up that menu, it also shows the hot key that will allow me to actually copy something.
That's helping me to learn to interact with the application more efficiently through hot keys instead of
through menus. But yet, it's also not assuming that I already knew that, because I was still able to find
this under that menu.
Once you've become an expert at something, you're unconsciously competent with what you're doing. When you're in that state, it can be difficult to
explain to someone who lacks that competence, because you aren't sure what makes you so good at it.
This is important, because as the designers of interfaces, we're the experts in our domains. That means
we're prone to design things that are easy for us to use but hard for anyone else.
To talk about cognitive load, let's think for a moment of the brain like it's a computer. The community
is actually divided on whether or not the brain actually operates this way, but for the purposes of this
explanation, it's a useful metaphor. So your brain has a certain number of resources available to it, the
same way your computer has a certain number of processor resources available to it. Each thing that
the brain is working on takes up some of those resources.
Let's say you're at home in a quiet area, working on a calculus problem that requires 60% of your
cognitive resources. In that setting, you have plenty of resources to solve that problem.
There are two takeaways here. One, we want to reduce the cognitive load posed by the interface, so that the user can focus on the
task.
Second, we want to understand the context of what else is going on while users are using our interface.
We need to understand what else is competing for the cognitive resources users need to use our
interface. If we're designing a GPS or navigation system for example, we want to be aware that the
user will have relatively few cognitive resources because they're focusing on so many things at once.
Let's take a second and reflect on cognitive load. Try to think of a task where you've encountered a high
cognitive load. What different things did you have to keep in mind at the same time? And how could an
interface have actually helped you with this problem?
Computer programming is one task with an incredibly high cognitive load. At any given time, you're
likely holding in working memory your goals for this line of code, your goals for this function, your goals
for this portion of the program as a whole, the variables you've created and a lot more. That's why
there's so many jokes about how bad it is to interrupt a programmer, because they have so much in
working memory that they lose when they transition to another task. But there are ways good IDEs can
help mitigate those issues. For example, inline automated error checking is one way to reduce the
cognitive load on programmers, because it lets them focus more on what they're trying to accomplish
rather than the low level syntax mistakes. In that way, the IDE offloads some of the responsibility from
the user to the interface. Now we could phrase that a little bit differently too. We could describe this
as distributing the cognitive load more evenly between the different components of the system, myself
and the computer. That's a perspective we discuss when we talk about distributed cognition.
Here are five quick tips for reducing cognitive load in your interfaces. Number 1, use multiple
modalities. Most often, that's going to be both visual and verbal. But when only one system is engaged,
it's natural for it to become overloaded while the other one becomes bored. So describe things verbally
and also present them visually. Number 2, let the modalities complement each other.
Some people will take that first tip and use it as justification to present different content in the two modalities. That actually increases cognitive load because the user has to try to process two things at
once, as you just noticed when Amanda put something irrelevant up while I said that.
Number 3, give the user control of the pace. That's more pertinent in educational applications of
cognitive load, but oftentimes interfaces have built-in timers on things like menus disappearing or
selections needing to be made. That dictates the pace, induces stress, and it raises cognitive load.
Instead, let the user control the pace. Number 4, emphasize essential content and minimize clutter.
The principle of discoverability says we want the user to be able to find the functions available to them,
but that could also raise cognitive load if we just give users a list of 500 different options. To alleviate
that, we can design our interfaces in a way that emphasizes the most common actions, while still giving
access to the full range of possible options. Number 5, offload tasks. Look closely at what the user has
to do or remember at every stage of the interface's operation. And ask if you can offload part of that
task on to the interface. For example, if a user needs to remember something that they entered on a
previous screen, show them what they entered. If there's a task they need to do manually, see if you
can trigger it automatically.
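As a tiny, hypothetical illustration of that last tip, here's a two-step flow in Python that echoes back what the user entered earlier instead of making them recall it; the prompts are invented for the example.

    def checkout():
        # Step 1: the user enters an address.
        address = input("Shipping address: ")
        # Step 2: rather than asking the user to remember what they typed
        # on the previous screen, the interface shows it back to them,
        # offloading the memory task from the user onto the interface.
        answer = input(f"Ship to {address!r} - is that correct? (y/n): ")
        return address if answer.strip().lower().startswith("y") else checkout()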
So our user has received some input. It's entered her memory, she cognitively processed it. Now it's
time to act in the world in response. In designing interfaces, we're also interested in what is physically
possible for users to do. This includes things like, how fast they can move, or how precisely they can
click or tap on something.
For example, here are two versions of the Spotify control widget that appears on Android phones. On
the left is the version that's available in the tray of the phone that you can access at any given time by
swiping down on the phone screen. And on the right is the version that appears on the lock screen
when you turn on a locked phone while it's playing music. In each case, the X closes the app, which is
consistent with a lot of other applications. The forward, back and pause buttons are similarly consistent
with their usual meanings. I don't actually know what the plus sign here does. It doesn't have a clear
mapping to some underlying function. Now note on the left, we have the close button, in the top right
corner. It's far away from anything else in the widget. On the right, the close button is right beside the
skip button. I can speak from considerable personal experience, and say that the level of specificity or
the level of precision required to tap that X, instead of tapping the skip button, is pretty significant.
Especially if you're using this while running or driving, or anything besides just sitting there, interacting
directly with your phone. The precision of the user's ability to tap on a button is significantly reduced in
those situations. And in this case, that can lead to the quick error of closing the application when all
you're trying to do is skip forward to the next song. This isn't an error in the perception of the screen.
It's not an error in their memory of the controls. They're not thinking that the X button actually is the
skip button. This is just an error in what they're physically able to perform at a given time. The
interface relies on more precision than they would have in many circumstances. So this design doesn't
take into consideration the motor system of the user or the full context surrounding usage of this interface.
We've talked a lot about different human abilities in this lesson. Depending on the domain you chose, the human abilities in which you're interested may vary dramatically. If you're looking at gestural interfaces or wearable devices, then the limitations of the human motor system might be very important. On the other hand, if you're interested in educational technology, you're likely more interested in some of the cognitive issues surrounding designing technology. For virtual reality, your main concern will likely be perception. Although, there are interesting open questions about how we
physically interact with virtual reality as well. So take a few moments and reflect on what the limitations
of human ability are in the domain of HCI that you chose to explore.
Today, we've gone through a crash course on human abilities and perception. We started off by talking
about the main ways people perceive the world around them through sight, sound and touch.
Then we discussed some of the components of cognition, especially memory and learning.
[MUSIC] Over the many years of HCI development, experts have come up with a wide variety of
principles and heuristics for designing good interfaces. None of these are hard and fast rules like the law
of gravity or something. But they're useful guidelines to keep in mind when designing our interfaces.
Likely, the most popular and influential of these is Don Norman's six principles of design.
Larry Constantine and Lucy Lockwood have a similar set of principles of user interface design, with some
overlaps but also some distinctions. Jakob Nielsen, similarly, has a set of ten design heuristics.
And while those are all interested in general usability, there also exists a set of seven principles called
Principles of Universal Design. These are similarly concerned with usability, but more specifically for the
greatest number of people. Putting these four sets together, we'll talk about 15 unique principles for
interaction design.
In this lesson we're going to talk about four sets of design principles. These aren't the only four sets,
but they're the ones that I see referenced most often, and we'll talk about what some of the others
might be at the end of the lesson. In this book, The Design of Everyday Things, Don Norman outlined
his famous six design principles. This is probably the most famous set of design principles out there.
The more recent versions of the book actually have a seventh principle. But that seventh principle is actually the subject of one of our entire lessons.
Jakob Nielsen outlines ten design heuristics in his book, Usability Inspection Methods. Many of
Norman's principles are similar to Nielsen's but there's some unique ones, as well. What's interesting,
is Norman and Nielsen went into business together and formed the Nielsen Norman Group, which is for
user experience training, consulting, and HCI research.
Finally, Ronald Mace of North Carolina State University proposed Seven Principles of Universal Design.
The Center for Excellence in Universal Design, whose mobile site is presented here, has continued
research in this area. These are a little bit different than the heuristics and principles presented in the
other three. While these three are most concerned with usability in general, universal design is
specifically concerned with designing interfaces and devices that can be used by everyone regardless of
age, disability, and so on. To make this lesson a little easier to follow, I've tried to merge these four sets
of principles into one larger set capturing the overlap between many of them.
In this lesson, we'll go through these 15 principles. These principles are intended to distill out some of
the overlap between those different sets.
Don Norman describes discoverability by asking, is it possible to even figure out what actions are possible and where
and how to perform them? Nielsen has a similar principle. He advises us to minimize the user's memory
load by making objects, actions, and options visible. Instructions for use of the system should be visible
or easily retrievable whenever appropriate. In other words, when the user doesn't know what to do,
they should be able to easily figure out what to do. Constantine and Lockwood have a similar principle
called the visibility principle. The design should make all needed options and materials for a given task
visible without distracting the user with extraneous or redundant information. The idea behind all
three of these principles is that relevant functions should be made visible, so the user can discover them
as opposed to having to read about them in some documentation or learn them through some tutorial.
Discoverability is one of the challenges for designing gesture-based interfaces. To understand this, let's
watch Morgan do some ordinary actions with her phone.
There's a lot of ways we might do this, from giving her a tutorial in advance, to giving her some tutoring
in context. For example, we might use the title bar of the phone to just briefly flash a message letting
the user know when something they've done could have been triggered by a gesture or a voice
command. That way, we're delivering instruction in the context of the activity. We could also give a log
of those so that they can check back at their convenience and see the tasks they could have performed
in other ways.
There often exists a tension between discoverability and simplicity. On the one hand, discoverability
means you need to be able to find things. But how can you find them if they're not accessible or visible?
That's how you get interfaces like this with way too many things visible. And ironically as a result, it
actually becomes harder to find what you're looking for because there's so many different things you
have to look at.
This is where the principle of simplicity comes in. Simplicity is part of three of our sets of principles,
Nielsen's, Constantine and Lockwood's, and the universal design principles.
Now in some ways, these principles are about designing interfaces but they cover other elements as
well. One example of this is the infamous blue screen of death from the Windows operating systems.
On the left we have the blue screen of death as it appeared in older versions of Windows. And on the
right we have how it appears now on Windows 10. There are a lot of changes here. The blue is softer
and more appealing. The description of the error is in plain language. But the same information is still
provided, it's just de-emphasized. This is a nice application of Nielsen's heuristic. The user should only
be given as much information as they need. Here, the information that most users would need, which
is just that a problem occurred and here's how close I am to recovering from it, is presented more
prominently than the detailed information that might only be useful to an expert.
Navigating the sign on the left is pretty much impossible. But it's pretty easy to interpret the one on the
right. The universal design principle of simplicity is particularly interested in whether or not people of
different experiences, levels of knowledge, or languages can figure out what to do. Navigating this sign
requires a lot of cognitive attention and some language skills. Whereas I would hypothesize that even
someone who struggles with English might be able to make sense of the sign on the right. These two
signs communicate the same information, but while the one on the left requires a lot of cognitive load
and language skills, the one on the right can probably be understood with little effort and little
experience.
One way to keep design both simple and usable is to design interfaces that by their very design tell you
how to use them. Don Norman describes these as affordances. The design of a thing affords or hints at
the way it's supposed to be used. This is also similar to the familiarity principle from Dix et al. This is
extremely common in the physical world because the physical design of objects is connected to the
physical function that they serve. Buttons are meant to be pressed. Handles are meant to be pulled.
Knobs are meant to be turned. You can simply look at it and understand how you're supposed to use it.
The challenge is that in the virtual computer world, there's no such inherent connection between the
physical design and the function of an interface, the way you might often find in the real world. For
example, when I mouse over a button in my interface, the style that appears around it makes it look like
it's elevated. Makes it look like it's popping out of the interface. That affords the action of then pushing
it down, and I know that I need to click it to push it down.
Here, this color picker does a good job of this. The horizontal line effectively shows us the list of options
available to us. The placement of the dial suggests where we are now. And there's this kind of implicit
notion that I could drag this dial around to change my color. We can also leverage metaphors or
analogies to physical devices. You can imagine that if this content was presented like a book, I might
scroll through it by flicking to the side, as if it's a page. You may have seen interfaces that work exactly
like that. There's no computational reason why that should mean go to the next page or that should
mean go back a page. Except that it makes sense in the context of the physical interface it's meant to
mimic. We swipe in a book so let's swipe in a book-like interface.
Norman and Nielsen both talk about the need for a mapping between interfaces and their effects in the
world.
Norman notes that mapping is actually a technical term coming from mathematics that means a
relationship between the elements of two sets of things. In this case, our two sets are the interface and
the world. For example, these book icons might help you map these quotes to the books from which
they were taken. Nielsen describes mapping by saying the system should speak the users' language,
with words, phrases, and concepts that are familiar to the user, rather than system-oriented terms.
Follow real-world conventions, making information appear in a natural and logical order. A great
example of this is the fact that we call cut, copy, and paste just that: cut, copy, and paste. Surely there could
have been more technical terms like duplicate instead of copy. But using cut, copy, and paste forms a
natural mapping between our own vocabulary and what happens in the system. Note that these two
principles are subtly different, but they're actually strongly related. Nielsen's heuristic describes the
general goal, while Norman's principle describes one way to achieve it. Strong mappings help make
information appear in natural and logical order.
We can see that difference in our color picker again. Affordances were about creating interfaces where
their designs suggested how they're supposed to be used. The placement of this notch along this
horizontal bar, kind of affords the idea that it could be dragged around. The horizontal bar visualizes
the space which makes it seem like we could move that notch around to set our color. However, that
design on its own wouldn't necessarily create a good mapping. Imagine, if instead of the bar fading
from white to black, it was just white the entire way. It would still be very obvious how you're
supposed to use it. But it wouldn't be obvious what the effect of using it would actually be. It's the
presence of that fade from white to black that makes it easier to see what will happen if I actually drag it.
A good example of the difference between affordances and mappings is a light switch. A light switch
very clearly affords how you're supposed to use it. You're supposed to flip it. But these switches have
no mapping to what will happen when I switch them. I can look at it and clearly see what I'm supposed
to do. But I can't tell what the effect is going to be in the real world.
Contrast with the dials on my stove. There are four dials and each is augmented with this little icon that
tells you which burner is controlled by that dial. So there's a mapping between the controls and the
effects. So how would you redesign these light switches to create not only affordances but also
mappings? In case it's relevant: this one turns on the breakfast room light, this one turns on the counter light
and this one turns on the kitchen light.
There are a few things we could do actually. Maybe we could put a small letter next to each light switch
that indicates which light in the room that switch controls. K for kitchen, C for counter top, B for breakfast room.
Our next principle is perceptibility. Perceptibility refers to the user's ability to perceive the state of the
system.
Nielsen states that the system should always keep users informed about what is going on, through
appropriate feedback within reasonable time. That allows the user to perceive what's going on inside
the system. Universal design notes that the design should communicate necessary information
effectively to the user, regardless of ambient conditions or the user's sensory abilities. In other words,
everyone using the interface should be able to perceive the current state. Note that this is also similar
to Norman's notion of feedback. He writes that feedback must be immediate, must be informative, and
that poor feedback can be worse than no feedback at all. But feedback is so ubiquitous, so general, that
really, feedback could be applied to any principle we talk about in this entire lesson. So instead we're
going to reserve this more narrow definition for when we talk about errors. And our lesson on
feedback cycles covers the idea of feedback more generally. Things like light switches and oven dials actually do this very nicely. I can look at a light switch and determine whether the system it controls is on or off, based on whether the switch is up or down. Same with the oven dial. I can immediately see where the dial is set. But there's a common household control that flagrantly violates this principle of
perceptibility.
Consistency is a principle from Norman, Nielsen, and Constantine and Lockwood. It refers to using
controls, using visualizations, using layouts, using anything we use in our interface design consistently,
across both the interfaces that we design and what we design more broadly as a community.
Norman writes that consistency in design is virtuous. That's a powerful word there. It means that lessons
learned with one system transfer readily to others. If a new way of doing things is only slightly better
than the old, it's better to be consistent. Of course there will be times when new ways of doing things
will be significantly better than the old. That is how we actually make progress, that is how we advance.
If we are only making tiny little iterative improvements, it might be better to stick to the old way of
doing things, because users are used to it. They are able to do it more efficiently. Nielsen writes that
users should not have to wonder whether different words, situations, or actions mean the same thing.
Follow platform conventions. In other words, be consistent with what other people have done on the
same platform, in the same domain, and so on. Constantine and Lockwood describe consistency as
reuse. They say the design should reuse internal and external components and behaviors, maintaining
consistency with purpose rather than merely arbitrary consistency, thus reducing the need for users to
rethink and remember. That means that we don't have to be consistent with things that don't really
impact what the user knows to do. The color of the window, for example, isn't going to change whether
the user understands what the word copy means in the context of an interface. But changing the word
copy to duplicate might force users to actually rethink and remember what that term means. In some
cases, that might be a good thing. If duplicate actually does something slightly different than copy, then forcing the user to stop and think might be exactly what we want.
One great example of following these conventions is the links we use in text on most websites. For
whatever reason, an early convention on the internet was for links to be blue and underlined. Now
when we want to indicate to users that some text is clickable, what do we do? Generally, we might
make it blue and underline it. Sometimes we change this, as you can see here. Underlining has actually
fallen out of fashion in a lot of places and now we just use the distinct text color to indicate a link that
can be clicked. On some other sites, the color itself might be different. It might be red against the surrounding text instead of blue. But the convention of using a contrasting color to mark links has
remained and the most fundamental convention is still blue underlines. Again, there's no physical
reason why links need to be blue, or why they even need to be a different text color at all. But that
convention helps users understand how to use our interfaces. If you've used the internet before and
then visit Wikipedia for the first time, you'll understand that these are links without even thinking about
it. Most of the interfaces we design will have a number of functions in common with other interfaces.
So by leveraging the way things have been done in the past, we can help users understand our
interfaces more quickly.
One of my favorite examples of how consistency matters comes from Microsoft’s Visual Studio
development environment. And to be clear: I adore Visual Studio, so I’m not just piling onto it. As you
see here, in most interfaces, Ctrl+Y is the ‘redo’ hotkey. If you hit undo one too many times, you can hit
Ctrl+Y to ‘redo’ the last ‘undone’ action.
But in Visual Studio, by default it’s… Shift+Alt+Backspace. What?! And what’s worse than this is Ctrl+Y
is the ‘delete line’ function, which is a function I had never even heard of before Visual Studio. So, if
you’re pressing Ctrl+Z a bunch of times to maybe rewind the changes you've made lately, and then you
press Ctrl+Y out of habit because that's what every other interface uses for redo, the effect is that you
delete the current line instead of redoing anything and that actually makes a new change, which means
you lose that entire tree of redoable actions. Anything you've undone now can't be recovered. It’s
infuriating. And yet, it isn’t without its reasons. The reason: consistency.
Depending on your expertise with the computers, there's a strong chance you've found yourself on one
side or the other of the following exchange. Imagine one person is watching another person use a
computer. The person using the computer repeatedly right-clicks and selects Cut to cut things. And
then right-clicks and selects Paste to paste them back again. The person watching insists that they can
just use Ctrl+X and Ctrl+V. The person working doesn't understand why the person watching cares. The
person watching doesn't understand why the person working won't use the more efficient method.
And in reality, they're both right.
This is the principle of flexibility. These two options are available because of the principle of flexibility
from both Nielsen's heuristics and the principles of universal design.
The principle of flexibility in some ways appears to clash with the principle of equity. But both come
from the principles of universal design.
The principle of flexibility said the design should accommodate a wide range of individual preferences
and abilities. But the principle of equity says the design is useful and marketable to people with diverse
abilities, and it goes on to say we should provide the same means for all users, identical whenever
possible and equivalent when not. And we should avoid segregating or stigmatizing any users. Now, in
some ways, these principles might compete. This says we should allow every user to use the system the
same way, whereas this one says that we should allow different, flexible methods of interacting with
the system. In reality, though, these are actually complementary of one another. Equity is largely about
helping all users have the same user experience, while flexibility might be a means to achieve that. For
example, if we want all our users to enjoy using our interface, keeping things discoverable for novice
users and efficient for expert users allows us to accommodate a wide range of individual preferences
and abilities. User experience in this instance means treating every user like they're within the target
audience and extending the same benefits to all users, including things like privacy and security. We
might do that in different ways, but the important note is that the experience is the same across all
users. That's what equity is about.
Ease and comfort are two similar ideas that come from the principles of universal design. And they also
relate to equitable treatment, specifically in terms of physical interaction.
The ease principle, which interestingly uses the word comfort, says the design can be used efficiently
and comfortably and with a minimum amount of fatigue. The comfort principle notes that appropriate
size and space is provided for approach, reach, manipulation and use regardless of the user's body size,
posture or mobility. Now, in the past, these principles didn't have an enormous amount of application
to HCI, because we generally assumed that the user was sitting at their desk with a keyboard and a
monitor. But as more and more interfaces are becoming equipped with computers, we'll find HCI
dealing with these issues more and more. For example, the seat control in your car might now actually
be run by a computer that remembers your settings and restores them when you get back in the car.
That's an instance of HCI trying to improve user ease and comfort in a physical area.
The structure principle is concerned with the overall architecture of a user interface. In many ways, it’s
more closely related to the narrower field of user interface design than HCI more generally.
It comes from Constantine and Lockwood and they define it as their structure principle which says that
design should organize the user interface purposefully, in meaningful and useful ways based on clear,
consistent models that are apparent and recognizable to users, putting related things together and
separating unrelated things, differentiating dissimilar things and making similar things resemble one
another. That's a long sentence. But what it really says is we should organize our user interfaces in
ways that help the user's mental model match the actual content of the task. What’s interesting to me
about the structure principle is that it borrows from a form of UI design that predates computers
considerably. We find many of the principles we learned in designing newspapers and textbooks apply
nicely to user interfaces as well.
In designing user interfaces our goal is typically to make the interface usable. And a big part of usability
is accounting for user error.
Many design theorists argue that there's actually no such thing as user error. If the user commits an
error it was because the system was not structured in a way to prevent or recover from it. And I
happen to agree with that. Now, one way we can avoid error is by preventing the user from performing
erroneously in the first place. This is the idea of constraints. Constraining the user to only performing
the correct actions in the first place.
Our password reset screen actually does this pretty well. First, it shows us the constraints under which
we're operating right there visibly on the screen so we're not left guessing as to what we're supposed
to be doing. Then, as we start to interact, it tells us if we're violating any of those constraints.
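Here's a sketch of how that kind of inline constraint checking might look in Python; the specific rules are invented for illustration, not the ones from the actual screen.

    import re

    # Each constraint pairs the message shown to the user with a check.
    RULES = [
        ("at least 8 characters", lambda p: len(p) >= 8),
        ("at least one digit", lambda p: re.search(r"\d", p) is not None),
        ("at least one uppercase letter", lambda p: re.search(r"[A-Z]", p) is not None),
    ]

    def violations(password):
        # Returning the violated rules as the user types lets the interface
        # show exactly which constraints aren't yet satisfied, instead of
        # rejecting the whole input after submission.
        return [message for message, ok in RULES if not ok(password)]

    print(violations("kitten1"))  # ['at least 8 characters', 'at least one uppercase letter']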
Norman takes us a step further though, when he breaks down constraints into four sub-categories.
These aren't just about preventing wrong input. They're also about ensuring correct input. They're about making sure the user knows what to do next. Physical constraints are those that literally, physically prevent you from performing the wrong action. A three-prong plug, for example, can only physically be inserted in one way, which prevents mistakes. USB sticks, similarly, can only be physically inserted one way. But the constraint doesn't arise until you've already tried to do it incorrectly. You can look at a wall outlet and understand if you're trying to insert a plug incorrectly. But it's harder to look at a USB stick and know whether you're trying to insert it the right way. A second kind is a cultural constraint.
These are those rules that are generally followed by different societies, like facing forward on
escalators, or forming a line while waiting. In designing we might rely on these, but we should be
careful of intercultural differences. A third kind of constraint is a semantic constraint. Those are
constraints that are inherent to the meaning of a situation. They're similar to affordances in that regard.
For example, the purpose of a rear view mirror is to see behind you. So therefore, the mirror must
reflect from behind, it's inherent to the idea of a rear view mirror, that it should reflect in a certain way.
In the future that meaning might change, autonomous vehicles might not need mirrors for passengers,
so the semantic constraints of today, might be gone tomorrow. And finally the fourth kind of constraint
is a logical constraint. Logical constraints are things that are self-evident based on a situation, not just
based on the design of something like a semantic constraint, but based on the situation at hand. For
example, imagine building some furniture. When you reach the end, there's only one hole left, and only
one screw. Logically, the one screw left is constrained to go in the one remaining hole. That's a logical
constraint.
A lot of the principles we talk about are cases where you might never even notice if they've been done
well. There are principles of invisible design, where succeeding allows the user to focus on the
underlying tasks. But constraints are different. Constraints actively stand in the user's way and that
means they've become more visible. That's often a bad thing, but in the case of constraints it serves
the greater good. Constraints might prevent users from entering invalid input or force users to adopt
certain safeguards. So of all the principles we've discussed, this might be the one you've noticed. So
take a second, and think. Can you think of any times you've encountered interfaces that had
constraints in them?
I have kind of an interesting example of this. I can't demonstrate it well because the car has to be in
motion, but on my Leaf there's an option screen, and it lets you change the time and the date, and
some other options on the car. And you can use that option screen until the car starts moving. But at
that point, the menu blocks you from using it, saying you can only use it when the car is at rest. That's
for safety reasons. They don't want people fiddling with the option screen while driving. What makes it
interesting, though, is it's a constraint that isn't in the service of usability, it's in the service of safety.
The car is made less usable to make it more safe.
We can't constrain away all errors all the time though. So there are two principles for how we deal with
errors that do occur, feedback and tolerance. Tolerance means that users shouldn't be at risk of causing
too much trouble accidentally.
For this, Nielsen writes that users often choose system functions by mistake. And will need a clearly
marked emergency exit to leave the unwanted state without having to go through an extended
dialogue. Support undo and redo. For Constantine and Lockwood this is the tolerance principle. They
write the design should be flexible and tolerant, reducing the cost of mistakes and misuse by allowing
undoing and redoing, while also preventing errors wherever possible. It should be becoming clearer why that Ctrl+Y issue with Visual Studio was so significant. Undo and redo are fundamental concepts of tolerance, and that Ctrl+Y issue, where Ctrl+Y removes the line in Visual Studio, gets in the way of redo allowing us to recover from mistakes. Universal Design simply says the design minimizes hazards and the adverse consequences of accidental or unintended actions. Dix et al also refer to this as the principle of recoverability. Now, Nielsen's definition is most interested in supporting user experimentation. The system should tolerate users poking around with things. That actually
enhances the principle of discoverability because if the user feels safe experimenting with things
they're more likely to discover what's available to them. The principles from Constantine and Lockwood
and the principles of Universal Design are more about recovering from traditional mistakes.
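To see why a stray "new" action is so destructive to tolerance, consider a minimal sketch of the two-stack undo/redo bookkeeping most editors use. This is the standard textbook structure, not Visual Studio's actual implementation.

    class Editor:
        def __init__(self):
            self.undo_stack, self.redo_stack = [], []

        def do(self, action):
            self.undo_stack.append(action)
            # Any brand-new action clears the redo stack. This is exactly
            # why hitting "delete line" when you meant "redo" (the Ctrl+Y
            # mix-up) silently destroys everything you had undone.
            self.redo_stack.clear()

        def undo(self):
            if self.undo_stack:
                self.redo_stack.append(self.undo_stack.pop())

        def redo(self):
            if self.redo_stack:
                self.undo_stack.append(self.redo_stack.pop())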
Second, the system should give plenty of feedback so that the user can understand why the error
happened and how to avoid it in the future.
Norman writes that feedback must be immediate and it must be informative. Poor feedback can be
worse than no feedback at all. Because it's distracting, uninformative, and, in many cases, irritating and
anxiety-provoking. If anything has ever described the classic Windows Blue Screen of Death, it's this.
It's terrifying. It's bold. It's cryptic. And it scares you more than it informs you. Nielsen writes that
error messages should be expressed in plain language (no codes), precisely indicate the problem, and
constructively suggest a solution. Note this tight relationship with recoverability. Not only should it be
possible to recover from an error, the system should tell you exactly how to recover from an error.
That's feedback in response to errors. For Constantine and Lockwood, this is the feedback principle.
The design should keep users informed of actions or interpretations, changes of state or condition, and
errors or exceptions... through clear, concise, and unambiguous language familiar to users. Again, the
old Windows blue screen of death doesn't do this very well. Because the language is not familiar, it's
not concise, and it doesn't actually tell you what the state or condition is. The new one does a much
better job of this. Notice as well that Norman, Constantine, and Lockwood are interested in feedback
more generally, not just in response to errors. That's so fundamental that we have an entire lesson on
feedback cycles that really is more emblematic of the overall principle of feedback. Here we're most
interested in feedback in response to errors, which is a very important concept on its own.
Finally, Nielsen has one last heuristic regarding user error: documentation. I put this last for a reason: one goal of usable design is to avoid the need for documentation altogether. We want users to just interact naturally with our interfaces. In modern design, we probably can't rely on users reading our documentation at all unless they're absolutely required to use our interface.
And Nielsen generally agrees. He writes that even though it's better if the system can be used without
documentation, it may be necessary to provide help and documentation. Any such information should
be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large. I
feel modern design as a whole has made great strides in this direction over the past several years.
Nowadays, most often, when you use documentation online or wherever you might find it, it's framed
in terms of tasks. You input what you want to do, and it gives you a concrete list of steps to actually
carry it out. That's a refreshing change compared to older documentation, which was more dedicated
to just listing out everything a given interface could do without any consideration to what you were
actually trying to do.
We've talked about a bunch of different design principles in this lesson. How these design principles
apply to your design tasks will differ significantly based on what area you're working in. In gestural
interfaces, for example, constraints present a big challenge because we can't physically constrain our users' movement. We have to give them feedback or feedforward in different ways. If we're working in
particularly complex domains, we have to think hard about what simplicity means. If the underlying
task is complex, how simple can and should the interface actually be? We might find ourselves in
domains with enormous concerns regarding universal design. If you create something that a person
with a disability can't use, you risk big problems, both ethically and legally. So take a few moments and
reflect on how these design principles apply to the area of HCI that you've chosen to investigate.
So I've attempted to distill the 29 combined principles from Norman, from Nielsen, Constantine,
Lockwood, and the Institute for Universal Design into just these 15.
Here you can see where each of these principles comes from. I do recommend reading the original four
lists to pick up on some of the more subtle differences between these principles that I grouped together,
especially perceptibility, tolerance, and feedback. Note also that in more recent editions, Norman has
one more principle: conceptual models. That's actually the subject of an entire lesson in this course.
These also certainly aren't the only four sets of design principles. There are several more.
We'll talk about Dix, Finlay, Abowd and Beale's learnability principles when we talk about mental models. Jill Gerhardt-Powals has a list of principles for cognitive engineering, aimed especially at reducing cognitive load. Her list has some particularly useful applications for data processing and visualization. In The Humane Interface, Jef Raskin outlines some additional, rather revolutionary design rules. I wouldn't necessarily advocate following them, but they're an interesting look at a very different approach to things. In Computer Graphics Principles and
Practice, Jim Foley and others give some principles that apply specifically to 2D and 3D computer
graphics. And finally Susan Weinschenk and Dean Barker have a set of guidelines that provide an even
more holistic view of interface design, including things like linguistic and cultural sensitivity, tempo and
pace, and domain clarity. And even these are only some of the additional lists. There are many more
that I encourage you to look into.
In this lesson, I’ve tried to take the various different lists of usability guidelines from different sources
and distill them down to a list you can work with. We combined the lists from Don Norman, Jakob
Nielsen, Larry Constantine, Lucy Lockwood, and the Institute for Universal Design into fifteen principles.
Now remember, though, these are just guidelines, principles, and heuristics: none of them are
unbreakable rules. You’ll often find yourself wrestling with tensions between multiple principles. There
will be something cool you’ll want to implement, but only your most expert users will be able to
understand it. Or, there will be some new interaction method that you want to test, but you aren’t sure
how to make it visible or learnable to the user. These principles are things you should think about when
designing, but they only get you so far. You still need needfinding, prototyping, and evaluation to find
what actually works in reality.
Introduction
[MUSIC] Today we're going to talk about mental models and representations. A mental model is the
understanding you hold in your head about the world around you. Simulating a mental model allows
you to make predictions and figure out how to achieve your goals out in the real world. A good
interface will give the user a good mental model of the system that it presents.
In order to develop good mental models, we need to give users good representations of the system with which they're interacting. In that way, we can help users learn how to use our interfaces as quickly as possible.
We'll start by talking about mental models in general and how they apply to the interfaces with which
we're familiar.
Then we'll talk about how representations can make problem solving easier or harder.
Then we'll discuss how user error can arise either from inaccuracies or mistakes in the user's mental
model, or just from accidental slips, despite an accurate mental model.
A mental model is a person's understanding of the way something in the real world works. It's an
understanding of the processes, relationships and connections in real systems. Using mental models
we generate expectations or predictions about the world and then we check whether the actual
outcomes match our mental model. So I'm holding this basketball because generally, we all probably
have a model of what will happen if I try to bounce this ball. [SOUND] It comes back up. [SOUND] You
didn't have to see it come up to know what would happen. You use your mental model of the world to
simulate the event. And then you use that mental simulation to make predictions. When reality doesn't
match with our mental model, it makes us uncomfortable. We want to know why our mental model
was wrong. Maybe it makes us curious. But when it happens over and over, it can frustrate us. It can
make us feel that we just don't and never will understand. As interface designers, this presents us with
a lot of challenges. We want to make sure that the user's mental model of our systems matches the way
our systems actually work.
Mental models are not a uniquely HCI principle. In fact, if you search for mental models online, you’ll
probably find just as much discussion of them in the context of education as the context of HCI. And
that’s a very useful analogy to keep in mind. When you’re designing an interface, you’re playing very
much the role of an educator. Your goal is to teach your user how the system works through the design
of your interface. But unlike a teacher, you don’t generally have the benefit of being able to stand here
and explain things directly to your user. Most users don’t watch tutorials or read documentation. You
have to design interfaces that teach users while they’re using them. That’s where representations will
come in. Good representations show the user exactly how the system actually works. It’s an enormous
challenge, but it's also incredibly satisfying when you do it well.
So let's talk a little bit about mental models in the context of the climate control systems we see on
automobiles. So this is my old car, it's a 1989 Volvo. It, sadly, does not run anymore. But let's talk
about how the climate control system would work back when it did run.
So, it's a hot day outside. Looking at these controls, how would you make the air temperature colder
and the air come out faster? The natural thought to me would be to turn the fan speed up over on the
right, and the air temperature to the blue side over on the left. But this doesn't actually make the
temperature any colder, it just disables the heat. This dial over here in the top right, has to be turned
on to make the air conditioning actually work. So just turning this thing over to the blue side doesn't
actually turn on the air conditioning. So to make it colder, you have to both slide this lever over to the
left, and turn this dial to the red area. It's kind of hard to see in this, but this little area over here on
the left side of this dial is actually red. The red area on the air conditioning control designates the
maximum coldness. What? This also means you can turn on both the heat and the air conditioning at the same time.
So, here's the climate control system from my Leaf. I have one dial, that turns the fan speed up and
down. One dial that turns the temperature of the air coming out up and down. And so as far as that's
concerned, it's pretty simple. But this interface still has some things that are pretty confusing. So for
example, it has an automatic mode, where it tries to adjust the air temperature and the fan speed to
bring the temperature of the car to the temperature that I want. So I press auto. Now it's going to
change the fan speed, and change the air temperature, if I didn't already have it at the lowest, to try
and get the car cooler faster. The problem is that if I want to turn auto off, I don't actually know how to do it. Pressing auto doesn't actually turn it off. If I turn it so that it doesn't only circulate air inside the car, then it turns auto off. But that might not be what I wanted. Maybe I wanted it to go to a certain air temperature without just circulating the air in the car. If I turn auto back on, it's going to turn that back on. I don't know why. So right now, as far as I know, the only way to turn auto off is to turn the
circulation mode off. It also lets me turn AC and heat on at the same time, which I don't understand at
all. Why would I ever need that? So there are some things that the system really should do better, or
some things that it should constrain so the user doesn't do things that don't make sense in the context
of wanting to set the temperature.
Matching our interface design to users' mental models, is a valuable way to create interfaces that are
easily learnable by users. Here are five tips, or in this case, principles to leverage for creating learnable
interfaces. These principles of learnability were proposed by Dix, Finlay, Abowd and Beale, in their
book, Human-Computer Interaction. Number one, predictability. Look at an action. Can the user predict what will happen? For example, graying out a button is a good way to help the user predict that clicking that button will do nothing (there's a small sketch of this after the list). Number two, synthesizability. Not only should the user be able to
predict the effects of an action before they perform it, they should also be able to see the sequence of
actions that led to their current state. That can be difficult in graphical user interfaces, but something
like the log of actions that they can see in the undo menu can make it easier. Command line interfaces
are actually good at this, they give a log of commands that have been given in order. Number three,
familiarity. This is similar to Norman's principle of affordances. The interface should leverage actions
with which the user is already familiar from real world experience. For example, if you're trying to indicate something is either good or bad, you'd likely want to use red and green instead of blue and
yellow. Number four, generalizability. Similar to familiarity and to Norman's principle of consistency,
knowledge of one user interface should generalize to others. If your interface has tasks that are similar
to other interface's tasks, like saving, and copying, and pasting, it should perform those tasks in the
same way. Number five, consistency. This is slightly different than Norman's principle of consistency.
This means that similar tasks or operations within a single interface, should behave the same way. For
example, you wouldn't want to have Ctrl+X cut some text if text is selected, but close the application if
there is no text selected. The behavior of that action, should be consistent across the interface. Using
these principles can help the user leverage their existing mental models of other designs, as well as
develop a mental model of your interface as quickly as possible.
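Here's what the graying-out example for predictability might look like as a runnable sketch, using Python's built-in tkinter; the widgets and behavior are invented for the example.

    import tkinter as tk

    root = tk.Tk()
    entry = tk.Entry(root)
    # Starting disabled (grayed out) lets the user predict that clicking
    # the button right now will do nothing.
    submit = tk.Button(root, text="Submit", state="disabled")

    def on_change(event):
        # Enable the button only once there's text to submit, so its
        # visual state always predicts the effect of clicking it.
        submit.config(state="normal" if entry.get() else "disabled")

    entry.bind("<KeyRelease>", on_change)
    entry.pack()
    submit.pack()
    root.mainloop()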
The most powerful tool in our arsenal to help ensure users have effective mental models of our systems
is representation. We get to choose how things are visualized to users, and so we get to choose some
of how their mental model develops. Using good representations can make all the difference between
effective and ineffective mental models. So to take an example, let's look at some instructions for
assembling things. So here are the instructions for a cat tree that I recently put together. At first, I
actually thought this was a pretty good representation. You can kind of see how things fit together from
the bottom to the top. The problem is that this representation doesn't actually map to the physical construction of the cat tree itself. You can see some bizarre mismatches even within this
representation. Up here, it looks like this pillar is in the front, but the screw hole that goes into it is
actually shown in the back. But we're not looking up into the bottom of that piece, because in the
middle piece, they actually map up pretty well. At least with the way they're shown here. Again, that
isn't the way the actual piece works. So anyway, the point is, this is a poor representation for the way
this furniture actually worked, because it wasn't a real mapping between this representation and the
real pieces.
A good representation for our problem will make the solution self-evident. Let's take a classic example
of this. A hiker starts climbing a mountain at 7 AM. He arrives at the cabin on top at 7 PM.
The next day, he leaves the cabin at 7 AM and arrives at the bottom at 7 PM.
Let's watch that animation again. The hiker goes up the hill on one day, stays the night. And then goes
back down the hill the next day. And we want to know, was the hiker ever at the same point at the
same time on both days? And the answer is yes. Described the way we've described it here, it might
actually seem odd that there is a point where the hiker is in the same place at the same time on both
days. That seems like a strange coincidence, but what if we tweak the representation a little bit?
Instead of one hiker going up and then coming down the next day. Let's visualize the two days at the
same time. If we represent the problem like this, we'll quickly see the hiker has to pass himself.
To show it again, we know the answer is yes, because there's a time when the hiker would have passed
himself if he was going in both directions on the same day. And to pass himself, he has to be in the
same point at the same time. That representation took a hard problem and made it very easy.
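Here's a minimal Python sketch of that better representation; the position profiles are arbitrary illustrative functions of clock time, not anything from the lesson. It overlays both days on the same clock and looks for where the two paths cross:

```python
# A sketch of the "two hikers on the same day" representation. The position
# profiles are arbitrary illustrative functions of clock time.
import numpy as np

hours = np.linspace(7, 19, 1000)       # 7 AM through 7 PM
going_up = (hours - 7) / 12            # 0 = bottom of the mountain, 1 = top
coming_down = 1 - (hours - 7) / 12     # the other day's descent, same clock

i = np.argmin(np.abs(going_up - coming_down))   # where the two paths cross
print(f"Same place at the same time near hour {hours[i]:.1f}, "
      f"{going_up[i]:.0%} of the way up")
```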
For simple problems, identifying a good representation can be easy. But what about for more complex
problems? For those problems, we might need some examples of what makes a good representation.
So let's try a complex example. We'll use a problem with which you might be familiar. For now, I'll call it
the circles and squares problem. On one side of a table, I have three circles and three squares. My goal
is to move the three circles and three squares to the other side of the table. I can only move up to two shapes at a time, and the direction of the moves must alternate, starting with a move to the right. The squares on either side can never outnumber the circles, unless there are no circles at all on that side. How many moves does it take to accomplish this? Try it out and enter the number of
moves it takes in the box. Or just skip if you give up.
If you solved it, well done on figuring it out despite such a terrible representation. Or congratulations
on recognizing by analogy, that it's the same as a problem you've seen in another class. If you skipped
it, I don't blame you. It's not an easy problem. But it's even harder when the representation is so poor.
There were lots of weaknesses in that representation. Let's step through how we would improve it. The
first thing we could do is simply write the problem out. Audio is a poor representation of complex
problems.
Here we have the shapes, the three circles and the three squares. And we can imagine actually moving
them back and forth.
That arrow in the center, represents the direction of the next move.
But we can still do better. Right now we have to work to compare the number of squares and circles, so
let's line them up. This makes it very easy to compare and make sure that the squares never outnumber the circles. And we can still do the same manipulation, moving them back and forth.
Now the only remaining problem is that we have to keep in working memory, the rule that squares may
not outnumber circles. There is no natural reason why squares may not outnumber circles; it's just kind of an arbitrary rule.
Finally, we can make this visualization even a little bit more useful, by actually showing the movements
between different states. That way we can see that for any state in the problem, there's a finite
number of next legal states. This would also allow us to notice when we've accidentally revisited an
earlier state, so we can avoid going around in circles. So for example, from this state, we might choose to move a wolf and a sheep back to the left (in this visualization, the squares and circles have become wolves and sheep, a metaphor we'll return to in a moment), but we'll immediately notice that would make the state the same as this one. And it's not useful to backtrack and revisit an earlier state. So we know not
to do that. So these representations have made it much easier to solve this problem, than just the
verbal representation we started with.
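This state-graph representation maps directly onto code. Here's a minimal Python sketch; the state encoding and function names are my own, not something from the lesson. It enumerates the finite set of legal next states from any state, and searches the graph while refusing to revisit earlier states:

```python
from collections import deque

START, GOAL = (3, 3, 0), (0, 0, 1)   # (sheep on left, wolves on left, boat side)

def successors(state):
    """Yield every legal next state; each move carries one or two animals."""
    sheep, wolves, boat = state
    for ds, dw in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        if boat == 0:                      # next move goes left -> right
            ns, nw = sheep - ds, wolves - dw
        else:                              # next move goes right -> left
            ns, nw = sheep + ds, wolves + dw
        if not (0 <= ns <= 3 and 0 <= nw <= 3):
            continue
        rs, rw = 3 - ns, 3 - nw            # counts on the right side
        # Wolves may never outnumber sheep on a side that has any sheep.
        if (ns and nw > ns) or (rs and rw > rs):
            continue
        yield (ns, nw, 1 - boat)

def solve():
    """Breadth-first search; the visited set keeps us from revisiting
    earlier states, i.e., from going around in circles."""
    frontier = deque([(START, [START])])
    visited = {START}
    while frontier:
        state, path = frontier.popleft()
        if state == GOAL:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))

path = solve()
print(len(path) - 1, "moves")   # the classic solution takes 11 crossings
```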
What are the characteristics of a good representation? First, good representations make relationships
explicit. Laying things out like this makes it easy to tell that there are more sheep than wolves. Second,
good representations bring objects and relationships together. Representing these as wolves and sheep makes the relationship, that the wolves must never outnumber the sheep, much more salient than using squares and circles. That brings the objects together with the relationships between them. Third,
a good representation excludes extraneous details. For example, sometimes this problem is described
in the form of having a river and a boat. But those details aren't actually relevant to solving the problem
at all. So, we've left them out of here. All we need to know is they need to move from the left to the
right. Doesn't matter if it's a river, doesn't matter if it's a boat, this is all the information that we need.
So we left out the extraneous information. Fourth, good representations expose natural constraints.
We describe these as sheep and wolves because it makes it easier to think about the rule that wolves
may never out number sheep. Now of course, this isn't the best rule because we know that sheep can't
actually defend themselves against wolves. Three sheep and one wolf, the wolf would still win.
However, if we visualize these as guards and prisoners instead, it involves holding in working memory the idea that prisoners inexplicably won't flee if they're left without any guards. So personally, I think
the wolves and sheep metaphor is better. But perhaps the original name of the problem is even better.
This was originally described as the cannibals and missionaries problem. It makes more sense that a
missionary could defend themselves against a cannibal than a sheep could defend themselves against a
wolf. But the cannibals and missionaries problem makes it a little bit dark. So let's stick with sheep and
wolves.
So let's take an example of redesigning a representation to create a better mapping with a task. Here
we have my circuit breaker.
On the left we have a list of breakers, on the right we have what they actually control. To reset a
breaker I need to go down the list on the left, find the one I want, count down on the right to find the
right breaker, and switch it. How can we make this representation of what each breaker corresponds to
better?
There are a number of things we can do here. The simplest change we could make would simply be to
make the breakers themselves writable. Instead of writing a list on the left that we have to then map
up to the breakers themselves on the right, we could just write on each breaker what it controls. That
way we just have to look at the breakers themselves to find the breaker that we're interested in.
Representations are all around us in the real world, but they play a huge role in interfaces. Designing
representations of the current state of a system is actually one of the most common tasks you might
perform as an interface designer. So let's take a look at a few, here's Google Calendar which is a
representation of my week. Notice how it actually uses space to represent blocks of time. It allows me
to quickly feel how long different things are going to take.
An alternate visualization might show an entire month instead of a week, but it would lose those spatial indicators of how long individual appointments last. So it doesn't really represent the structure and pace
of my day, the way the weekly calendar does. This representation also allows me to very easily find
conflicts in my schedule. So I know when I might need to reschedule something. On Friday I can see
that I have a conflict for one of my meetings. And this interface also makes it easy to reschedule.
Another example of this is the PowerPoint animation pane. The numbers here represent when different
animations happen concurrently. The middle icon represents what triggers the animation, and the
right icon indicates the general nature of the animation. Whether it's a movement, a highlight or an
appearance. The PC version of PowerPoint makes this even better by actually showing you a timeline of
the different animations to the right. That lets you very easily visualize when two different things are
going to happen at the same time. Or when something waits for something else to happen. These are
just two of the many many representations you use whenever you use a computer. Scroll bars for
example, are representations of your relative position in a document. Highlighting markers like that
rectangle are representations of what you currently have selected. All these representations work
together to help your mental model match the real state of the system. Representations when used
correctly can make many tasks trivial, or even invisible. And we as interface designers have a lot of
control over representations in our designs.
Analogies and metaphors are powerful tools for helping users understand your interface. If you can
ground your interface in something they already know, you can get a solid foundation in teaching them
how to use your interface. For example, the Wall Street Journal's website heavily leverages an analogy
to the Wall Street Journal print edition. The headlines, the grids, the text all appear pretty similarly, so
someone familiar with the print edition could pretty easily understand the online edition.
If you've ever tried to explain a tool that you use to someone who's never seen it, you've probably
encountered something like this. For example, both at Udacity and in the Georgia Tech OMSCS
program, we use Slack for communicating with each other. If you've never actually seen Slack, it's a
chat app for organizations to talk in different public and private rooms. But listen to what I just said, it's
a chat app. In my description, I leveraged an analogy to something you've already seen. Now, Slack is a
pretty easy example, because it is a chat app. It's barely even an analogy to say it's like a chat app,
because it is a chat app. It's a very full-featured chat app with a lot of integrations and things like that,
but it's fundamentally a chat app.
One of the challenges encountered by every new technology is helping the user understand how to use
it. Smartphones may be pretty ubiquitous by now, but we're still figuring out some elements of how to best use these things. Typing efficiency on a touch screen, for example, still hasn't caught up to efficiency with a full keyboard. Still, typing on a phone was a pretty straightforward transition from a regular keyboard, because the onscreen keyboard was designed just as an analogy to the real one. There are probably more efficient ways to enter text into a phone, but they wouldn't be
as easily learnable as this straightforward analogy to a physical keyboard. This illustrates both the
positive and negative sides of using analogies in designs.
Analogies make the interface more learnable but they also may restrict the interface to outdated
requirements or constraints. Take a moment and think about how this applies to your chosen area of
HCI. If you're looking at things like gestural interfaces or touch-based interfaces, what analogies can
you draw to other interfaces to make your designs more learnable? At the same time, what do you risk
by using those analogies?
In our lesson on design principles, we touch on a number of principles that are relevant to these ideas
of mental models, representations, and metaphors. First, the idea that people reason by analogy to past interfaces, or by metaphors to the real world, is one of the reasons that the principle of
consistency is so important. We want to be consistent with the analogies and metaphors that people
use to make sense of our interfaces. Second, when we say that an interface should teach the user how
the system works, we're echoing the idea of affordances. The way the system looks, should tell the
user how it's used. Just by observing the system, the user should be learning how to interact with it. Third, representations are important because they map the interface to the task at hand. A good representation is one that users can use to predict the outcomes of certain actions. In other words, a
good representation lets users predict the mapping between their actions in the interface, and the
outcomes out in the world.
In designing interfaces, we want to leverage analogies to the real world, and principles from past
interfaces whenever possible, to help the user learn the new interface as quickly as they can. But
there's a challenge here. Why are we designing technology if we're not providing users anything new?
It's one thing to take the technology they're already using, and make it more usable. But generally, we
also want to enable people to do things they've never done before. That means there are no analogies, no expectations, no prior experiences for them to leverage. How do you tell someone who's used to controlling their own thermostat that they don't need to anymore? So, while we need to leverage analogy and prior experience wherever possible, we also need to be aware that eventually, we're going to do something interesting, and those analogies are going to break down. Eventually, we're going to have to teach the
user to use the unique elements of our interface.
Every interface requires a user to do some learning to understand how to use it. Very often, we
visualize this as a learning curve. A learning curve plots expertise against experience. Generally, as the
user gains more experience, they also gain more expertise. Here, our user starts with no experience at
all, and so they also have no expertise at all. Our goal is for them to end with an expertise above this
line of proficiency. However, the shape and steepness of this curve can vary.
Ideally, we want a learning curve that grows quickly with relatively little experience. This is actually
what we call a steep learning curve, although usually when you hear steep learning curve, it means the
exact opposite. Technically, steep is good because steep means we're increasing very quickly with
relatively little experience. People often use steep to mean the opposite, and that's because steep calls
to mind connotations of a high difficulty level, like climbing a steep mountain. So steep is actually a
poor representation of this concept. So instead, let's for us call this a rapid learning curve, which means
that expertise grows very quickly with relatively little experience. Rapid calls to mind probably the
proper connotation that a rapid learning curve is rapid learning, which is probably something we want.
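If it helps to picture this, here's a minimal Python sketch that plots a rapid learning curve against a more gradual one, along with a line of proficiency. The curve shapes are arbitrary illustrative functions, not data from any study:

```python
# A sketch comparing a "rapid" learning curve with a gradual one; the curve
# shapes are arbitrary illustrative functions, not real data.
import numpy as np
import matplotlib.pyplot as plt

experience = np.linspace(0, 10, 200)
rapid = 1 - np.exp(-1.5 * experience)     # expertise climbs quickly early on
gradual = 1 - np.exp(-0.25 * experience)  # expertise climbs slowly

plt.plot(experience, rapid, label="rapid learning curve")
plt.plot(experience, gradual, label="gradual learning curve")
plt.axhline(0.8, linestyle="--", color="gray", label="line of proficiency")
plt.xlabel("experience")
plt.ylabel("expertise")
plt.legend()
plt.show()
```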
For one, if we're consistent with existing conventions and use analogies that users understand, we can
actually start them off with effectively some initial expertise.
For example, when you download a new smartphone app, you know that the three horizontal lines that
often appear in the top-right, likely indicate a menu. That's a consistent convention used across
multiple apps. And so using it means that when users open your app, they already have some initial expertise.
As we design interfaces, we will no doubt encounter instances where the user makes mistakes.
Sometimes this might be because our users are stressed or distracted. But other times it might be
because our users fundamentally don't understand our interfaces or don't understand their own goals.
As the designers, though, we know that there's really no such thing as user error. Any user error is a
failure of the user interface to properly guide the user to the right action. In designing interfaces, there
are two kinds of user error that we're interested in avoiding. The first are called slips. Slips occur when
the user has the right mental model but does the wrong thing anyway.
Take this box, for example, prompting a user who is closing a program about whether they'd like to save their
work. In all likelihood, the user probably knows exactly what they'd like to do, and typically it's that
they're going to want to save their work. If you ask them to explain what they should do, they would
say "click yes." But imagine if the order of these buttons was flipped. If the "No" was on the left and the
"Yes" was on the right. A user might click on the left just because they're used to seeing yes on the left,
even though they know they really want to click yes. Or, imagine that no was selected by default, so
that if the user just presses enter when this dialog comes up, it automatically says no. In that case, too, they knew that they wanted to save their work, but what they did didn't match the goal they wanted to accomplish.
Don Norman further divides slips into two different categories. He describes action-based slips and
memory lapse slips. Action-based slips are places where the user performs the wrong action, or
performs a right action on the wrong object, even though they knew the correct action. They might
click the wrong button, or right-click when they should left-click. A memory lapse slip occurs when the
user forgets something they knew to do. For example, they might forget to start a timer on a
microwave. They knew what to do, they just forgot about it. So action-based slips are doing the wrong
thing, and memory lapse slips are forgetting to do the right thing. In this dialog, clicking No when you
mean to click Yes would be an example of an action-based slip. The very existence of this dialog is
meant to prevent a memory lapse slip, where a user would forget to save their work before closing.
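To make this concrete, here's a minimal sketch of such a prompt in Python's Tkinter. The layout and wording are illustrative assumptions, not the actual dialog from the lesson; the point is that Yes sits where users expect it and is also the default, so pressing Enter does what the user almost certainly intends:

```python
# A sketch of a save prompt designed to prevent slips: "Yes" sits in the
# conventional position and is the default, so pressing Enter performs the
# action the user almost always intends. Layout and wording are illustrative.
import tkinter as tk

root = tk.Tk()
root.title("Close program")
tk.Label(root, text="Save your work before closing?").pack(padx=20, pady=10)

buttons = tk.Frame(root)
buttons.pack(pady=10)
yes = tk.Button(buttons, text="Yes", default="active",
                command=lambda: print("saving work"))
no = tk.Button(buttons, text="No", command=lambda: print("discarding work"))
yes.pack(side="left", padx=5)   # Yes on the left, where users expect it
no.pack(side="left", padx=5)

root.bind("<Return>", lambda event: yes.invoke())  # Enter triggers the default
root.mainloop()
```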
Norman also divides mistakes into multiple categories, in this case, three: rule-based mistakes, knowledge-based mistakes, and memory lapse mistakes. Rule-based mistakes occur where the
user correctly assesses the state of the world but makes the wrong decision based on it. Knowledge
based mistakes occur where the user incorrectly assesses the state of the world in the first place.
Memory lapse mistakes are similar to memory lapse slips, but this focuses on forgetting to fully execute
a plan not just forgetting to do something in the first place. If the user clicks the wrong button in this
dialog, it could be due to multiple different kinds of mistakes. Maybe they correctly knew they wanted to save their changes, but they mistakenly believed clicking No was what would save them; that would be a rule-based mistake. They knew they wanted to save, but they made the wrong decision based on that knowledge. Or perhaps they didn't even realize they wanted to save in the first place. Maybe they didn't think they had made any changes, when in actuality they did. That would be a knowledge-based mistake. They applied the right rule based on their knowledge, but their knowledge was inaccurate. If they were to shut down their computer and never come back to answer this dialog in the first place, that might be considered a memory lapse mistake. They didn't fully execute the plan of closing
down the application. So in our designs, we want to do everything we can to prevent all these different
kinds of errors. We want to help prevent routine errors by leveraging consistent practices like designing
dialogs the way users are used to. We also want to let our interface offload some of the demands on working memory from the user to the computer to avoid memory lapse errors. And we want to leverage good representations to help users develop the right mental models to minimize rule-based and knowledge-based errors. And while errors are inevitable, we should make sure to leverage
the tolerance principle to make sure the repercussions can never be too bad.
When you're looking to improve an interface, user errors are powerful places to start. They're
indicative either of weaknesses in the user's mental model or places where the system isn't capturing
the user's correct mental model. So let's try to address an error Morgan's encountering. Morgan
usually texts with her boyfriend but she texts with some other people too. But she finds she's often
sending the wrong messages to the wrong people. The app by default brings up the last open
conversation and usually that's her boyfriend. But sometimes it's someone else and she accidentally
messages them instead. First, is this a slip or is this a mistake?
I would argue this is a slip. Morgan knows who she means to message but the phone's behavior tricks
her into sending things to the wrong people. What's more, this might be either an action based slip or
memory lapse slip. Maybe Morgan is tapping the wrong person, or maybe she's forgetting to check
who she's messaging. So take a second and brainstorm a design for this that can prevent this from
happening in the future without over complicating the interaction too much. I would argue that the
best way to do this is simply to show more pervasive reminders of who Morgan is currently texting. We
could show the recipient's picture on the send button, for example. That way, the interaction is no
more complex, but Morgan also has to directly acknowledge who she's messaging to send a message.
The feedback cycle in HCI relies on a relationship between the user’s input and the interface’s
output. The idea of this cycle is that the user learns from the output what they should have input. If
they encounter an error, they receive feedback on how to avoid that next time. If they do something
correctly, they see that the goal was accomplished. That's the principle of feedback.
But what happens when there is no discernible interaction between the input and the output? What
happens when there's a break in this cycle? What happens when the user acts in the system over and
over and over again, but never receives any output that actually helps? What if they never even receive
output that indicates that the computer is understanding them or receiving input from them? That’s
when something called learned helplessness sets in.
The human working with the interface learns that they’re helpless to actually use the system. They learn that there is no mapping between their input and the output that they receive. And as a result, they stop trying.
Just like mental models, learned helplessness is also a topic related as much to education as it is to HCI.
If you've ever spent any time in a teaching role, you very likely encountered students that are very
resistant to being taught. And the reason is they have learned that no matter what they do, they never
succeed. They've learned to be helpless based on their past experiences. In all likelihood, there have
actually been situations where you've been the one learning that you're helpless. In fact, if you're a
parent, I can almost guarantee you've been in that situation. There are times when your child was
crying and inconsolable and you had no clue why. We had one of those right before we filmed this
video. Nothing you did helped. And you learned that you were helpless to figure out what your child
wanted. So if you're a parent and you're dealing with learned helplessness as an interface designer,
just imagine that you are the user and the interface is your screaming child. What feedback would you
need from your child to figure out how you can help them? And how can you build that kind of
feedback into your interface? [SOUND]
Generally, when we're developing interfaces, we're going to be experts in those domains. It's rare that
you design an interface to help people do something that you yourself don't know how to do. But as a
result, there's risk for something called expert blind spot. When you're an expert in something, there
are parts of the task that you do subconsciously without even really thinking about them. For example,
a professional basketball player knows exactly where to place their hands on the ball when taking a
shot. I know exactly what to do when I walk in the studio. Amanda knows exactly what to do when she
gets behind the camera. And yet, if we were suddenly asked to train someone else, there are lots of
things we'd forget to say or lots of things we would assume would just be obvious. That's exactly what
you're doing when you're designing an interface. You're teaching the user how to use what you've
designed. You're teaching them without the benefit of actually talking to them, explaining things to
them, or demonstrating things for them. You're teaching them through the design of the interface. So,
you have to make sure that you don't assume that they're an expert too. You have to overcome that
expert blind spot because we are not our users. We are not the user. That can be the motto of all of
HCI. I am not my user. Say it with me, I am not my user. One more time, I am not my user. Now type
it.
In order for us to really sympathize with users suffering from the effects of learned helplessness and
expert blind spot, it's important for us to understand what it's like to be in that position. We've all
experienced these things at some point in life, although at the time, we might not have understood
what was happening. So take a second and reflect on a time when you experienced learned
helplessness and the effects of expert blind spot from someone trying to teach you something. It might
have been in a class, it might be learning a new skill, or it might be doing something that everyone else
seems to do just fine day to day, but for whatever reason, you've always struggled with.
The fact that I'm filming this in the kitchen probably tells you where I experience this. Anything related
to cooking, I feel completely helpless. I've given myself food poisoning with undercooked meat lots of
times, I once forgot to put the cheese on a grilled cheese sandwich. I accidentally made toast, and it
wasn't even good toast. And I've always heard, it's just so easy, just follow the recipe, but no, it's not
that easy, because many recipes are written for experts. So for example, here's a recipe from my wife's
cookbook. It calls for a medium saucepan. Is this a medium saucepan? I have no idea.
In this lesson, we talked about mental models. We discussed what mental models are and how the user
uses them to make sense of a system.
We discussed how good representations can help users achieve strong models.
We then discussed learned helplessness which can come from giving poor feedback on user errors.
[MUSIC] When looking at human computer interaction, we're really looking at the tasks that users
perform. We look at the tasks that they're performing now and we try to restructure those tasks to be
more efficient using new interfaces. In all of this, the task is at the heart of the exercise. What task are
they performing?
So today, we're going to talk about two methods for formally articulating the tasks that people are completing.
First, we'll discuss the GOMS model, which distills tasks into goals, operators, methods, and selection rules. Second, we'll discuss cognitive task analysis, a way of trying to get inside the user's head instead of focusing just on the input and the output. Note that that's similar to the predictor model of the user that we also discuss elsewhere.
The GOMS model is a human information processor model so it builds off the processor model of the
human's role in a system. The GOMS model gets its name from the four sets of information it proposes
gathering about a task. G stands for the user's Goals in the system. O stands for the Operators the user
can perform in the system. M stands for the Methods that the user can use to achieve those goals. And
S stands for the Selection rules that the user uses to choose among different competing methods. So
the GOMS model proposes that every human interacting with a system has a set of Goals that they want to accomplish. They have a set of Methods that they can choose from to accomplish those goals. Each of those methods is comprised of a series of Operators that carries out that method. And they have some Selection rules that help them decide what method to use and when.
The GOMS model is often visualized like this. The user starts with some initial situation, and they have a
goal in mind that they want to accomplish, so they apply their selection rules to choose between different competing methods to accomplish that goal. Once they've chosen a method, they execute that series of operators, making that goal a reality.
We can take the GOMS model and apply it to a number of different domains. So let's take the example
of needing to communicate a message to a coworker.
We have an initial situation, which is the need to transfer information to a coworker. That carries with
it the implicit goal of the information having been transferred. We might have a number of different
methods in mind for how we could do that. We could email them, we could walk over and talk to them
in person. And we also have some selection rules that dictate how we choose amongst these methods.
If what we need to transfer is very time-sensitive, maybe we walk over and talk to them in person or
call them on the phone. If the information we need to transfer is complex and detailed, maybe we
write them an email. Or if it's more casual, maybe we chat with them or text them. No matter what
method we choose, we then execute the series of operators that carries out that method, and the
result is our goal is accomplished, the information has been transmitted.
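To make the structure of a GOMS model concrete, here's a minimal Python sketch of that same coworker example. The specific method names, operators, and selection rules are illustrative assumptions of mine, not formal GOMS notation:

```python
# A sketch of the coworker-communication example as a GOMS-like structure.
# Method names, operators, and selection rules are illustrative assumptions.
GOAL = "information transferred to coworker"

METHODS = {
    "talk in person": ["walk to desk", "speak message"],
    "email":          ["open client", "compose message", "click send"],
    "chat/text":      ["open chat app", "type message", "hit enter"],
}

def select_method(time_sensitive, complex_and_detailed):
    """Selection rules: choose among competing methods based on the situation."""
    if time_sensitive:
        return "talk in person"
    if complex_and_detailed:
        return "email"
    return "chat/text"

method = select_method(time_sensitive=False, complex_and_detailed=True)
print("Goal:     ", GOAL)
print("Method:   ", method)
print("Operators:", " -> ".join(METHODS[method]))
```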
Let's try this out. We're going to watch Morgan enter the house and undo her security system two
different ways. After you watch the video, try to outline Morgan's goals, operators, methods, and selection rules. [MUSIC] Now try to outline the goals, operators, methods and selection rules for these
two methods of disabling the security system.
There are strengths and weaknesses to the GOMS representation for tasks. One weakness is that it
doesn't automatically address a lot of the complexity of these problems. For example, there are likely many different methods and sub-methods for addressing a goal like getting to work. Before even getting to selection
rules among what route to take, you might decide whether to take public transportation or whether to
work from home that day. In parallel to that, even after deciding to drive, you might decide what car to
take if your family has more than one car. The standard GOMS model leaves those kinds of things out,
although there are augmented versions that have been created to deal with this kind of complexity, like
CMN-GOMS and NGOMSL. We'll talk about those a bit more later.
A second weakness is that the GOMS model assumes the user already has these methods in mind. That
means the user is already an expert in the area. GOMS does not do a good job of accounting for novices
or accounting for user errors. For example, if you're driving in an unfamiliar location, you don't even
know what the methods are, let alone how to choose among them.
There are several varieties of GOMS models. These varieties share the commonality of goals, operators, methods, and selection rules, but they differ in what additional elements they provide. Bonnie John
and David Kieras covered four popular variations in a paper from 1996. The first is the Vanilla GOMS
we've talked about so far. And the other three, are KLM GOMS, CMN GOMS, and NGOMSL. Let's talk
about what those acronyms actually mean.
They start with the Keystroke-Level Model, which is the simplest technique. Here, the designer simply
specifies the operators and execution times for an action, and sums them to find the complexity of an
interaction. This method proposed six different types of operators, although for modern interfaces,
we would need some new ones to cover touch screens and other novel interfaces.
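Here's a minimal sketch of a Keystroke-Level Model calculation in Python. The operator times below are commonly cited textbook estimates from Card, Moran, and Newell's work, and the example task is an illustrative assumption, not one from the lesson:

```python
# A sketch of a Keystroke-Level Model estimate. Times are commonly cited
# textbook values (Card, Moran & Newell), not measurements of any real task.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke or button press, average typist
    "P": 1.10,  # pointing at a target with a mouse
    "H": 0.40,  # homing hands between keyboard and mouse
    "M": 1.35,  # mental preparation for a step
}

def klm_estimate(operators):
    """Sum the execution times of a task's operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Illustrative task: think, move hand to mouse, point at a menu, click,
# point at a menu item, click.
task = ["M", "H", "P", "K", "P", "K"]
print(f"Estimated execution time: {klm_estimate(task):.2f} s")
```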
A second variation is CMN-GOMS, named for Card, Moran, and Newell, which adds hierarchy. For example, here we see a hierarchy of goals, as well as the ability to choose between multiple goals in different areas. Notice also the level of granularity behind these GOMS models. The goals go all the way down to little goals, like moving text or deleting phrases. These are very, very low-level goals.
A third variation is called Natural GOMS Language. Natural GOMS language, or NGOMSL, is a natural
language form of GOMS that lends itself to human interpretation. In all these cases, the important point
of emphasis is the way that these models allow us to focus in on places where we might be asking too
much of the user.
Here are five quick tips for developing GOMS models. Number 1, focus on small goals. We've used
some pretty big examples, but GOMS was really designed to work in the context of very small goals, like
navigating to the end of a document. You can abstract up from there, but start by identifying smaller,
moment to moment goals. Number 2, nest goals instead of operators. It's possible to nest goals. For
example, in our GOMS model of navigation, we could develop it further and break the overall task of
navigating down to smaller goals like changing lanes or plotting routes. Operators, however, are the
smallest atoms of a GOMS model. They don't break down any further and those must be the actual
actions that are performed. Number 3, differentiate descriptive and prescriptive. Make sure to identify
whether you're building a model of what people do or what you want them to do. You can build a goal
model of what people should do with your interface. But you shouldn't trick yourself into thinking that's
necessarily what they will do. Number 4, assign costs to operators. GOMS was designed to let us make
predictions about how long certain methods will take. The only way we can do that is if we have some
measurement of how long individual operations take. Usually this is time, but depending on the
domain, we might be interested in phrasing the cost differently as well. Number 5, use GOMS to trim
waste. One of the benefits of GOMS is it lets you visualize where an unnecessary number of operators
are required to accomplish some task. That's bolstered by the costs we assign to those operators. So
use GOMS to identify places where the number of operators required can be simplified by the interface.
GOMS models are human information processor models. This method largely assumes the human is an
input output machine, and it doesn't get too much into the internal reasoning of the human. Instead, it
distills their reasoning into things that can be described explicitly like goals and methods. Some would
argue, myself included, that human reasoning is actually too nuanced and complex to be so simplified.
They, or we, advocate other models to get more into what goes on inside the user's head. That's where
cognitive task analysis comes in. Cognitive task analysis is another way of examining tasks, but it puts a
much higher emphasis on things like memory, attention, and cognitive load. Thus, cognitive task
analysis adopts more of the predictor view of the human's role in the system.
This conflict between more processor-oriented and more predictor-oriented models of the user actually
gets at the core of an old battle in psychology between behaviorism and cognitivism. Behaviorism
emphasized things that could be observed. We can see what input a person is receiving. We can see
the output they're producing. And that might be all we need to understand the design of things.
Cognitivism, on the other hand, suggests we can and should get into the mind of what people are
actually thinking and how systems like memory and learning and perception actually work. So take a
moment and reflect on what you think about this. When designing interfaces, how much attention
should you devote to observable goals, operators and methods? And how much do you devote to
understanding internal thought processes, like cognition, learning, and memory?
You can probably guess my bias on this issue, given that I've already badmouthed the processor model
and I also teach cognitive systems. So I'm generally going to prefer methods that focus on cognition.
Cognitive Task Analysis is not really a single method; it's more of a general type of method for approaching the evaluation of how people complete tasks. Performing a cognitive task analysis involves a number of
different techniques and methods that we'll discuss more when we discuss the design life cycle. For
right now, though, we’re interested in what kinds of information we’re trying to gather, not how we’re
gathering it. Cognitive task analyses are especially concerned with understanding the underlying
thought process in performing a task. Not just what we can see, but specifically what we can’t see.
There are a lot of different methods for performing cognitive task analyses, but most methods follow a
particular common sequence.
First, we want to collect some preliminary knowledge. While we as interface designers don’t need to
become experts in a field, we need a good bit of familiarity with it. So, we might observe people
performing the task, for example. In navigation, for example, we might just watch someone driving and
using a GPS.
Our second step is to identify knowledge representations. In other words, what kinds of things does the
user need to know to complete their task? Note that we’re not yet concerned with the actual
knowledge they have, only the types or structures of the knowledge that they have. For example, we
want to know: does this task involve a series of steps in a certain order? Does it involve a collection of
tasks to check off in any order? Does it involve a web of knowledge to memorize? For navigation, for
example, we would identify that the structure of the knowledge is a sequence of actions in order, as well
as some knowledge of things to monitor as we go.
Then, we analyze and verify the data we acquired. Part of that is just confirming with the people we
observed that our understanding is correct. We might watch them do something and infer it's for one
reason when in reality it's for a very different reason. So we want to present to our users our results
and make sure that they agree with our understanding of their task. Then, we attempt to formalize it
into structures that can be compared and summarized across multiple data-gathering methods.
And finally, we format our results for the intended application. We need to take those results and
format them in a way that's useful for interface design. We want to develop models that show what the
user was thinking, feeling, and remembering at any given time and make those relationships explicit.
The result might look something like this:
Here we see a very high-level model of the process of driving to a destination. What’s interesting to
note is that these tasks in the middle are highly cognitive rather than observable. If I had no knowledge
about driving and sat in a passenger’s seat watching the driver, I might never know that they’re
monitoring their route progress or keeping an eye on their dashboard for how much fuel they have left.
If you have kids you may have experienced this personally, actually. To a kid sitting in the back seat,
Mommy or Daddy are just sitting in the driver's seat just like they're sitting in the passenger's seat. They
don't have a full understanding of the fact that you have a much higher cognitive load and you're doing
a lot more things while you're driving than they are. That's because what you're doing is not observable.
It's all in your head. So to get at these things I might have the user think out loud about what they’re doing while they’re doing it. I might have them tell me what they're thinking while they're driving the
car. That would give me some insights into these cognitive elements of the task.
Cognitive task analysis advocates building models of human reasoning and decision-making in complex
tasks. However, a challenge presented here is that very often, large tasks are actually composed of
many multiple smaller tasks. We can see this plainly present in our cognitive model of driving. These
tasks are so high-level that it’s almost useless to describe driving in these terms. Each part could be
broken down into various sub-tasks like iteratively checking all the cars around you or periodically
checking how long it is until the next turn needs to be made. What’s more, these smaller tasks could
then be used in different contexts. Route-monitoring, for example, isn’t only useful while driving a car --
it might be useful while running or biking or while riding as a passenger. Traffic monitoring might be
something an autonomous vehicle might do, not just the human user. So, the analysis of a task in a
particular context could be useful in designing interfaces for other contexts if we break the analysis
down into subtasks. So let’s take a simple example of this.
Here’s a somewhat simple model of the act of buying something online. Notice that a lot of the tasks
involved here are general to anyone shopping on any web site, and yet every web site needs to provide
all of these functions. As a side note, notice also the interesting analogy going on with the top two.
Anyhow, if we treat this cognitive task analysis more hierarchically we can start to see a well-defined
subtask around this checkout process. Every online vendor I've ever encountered has these steps in its
checkout process. Now because this is so well-defined, we could leverage existing tools, like existing
payment widgets or something like PayPal. This hierarchical task analysis helps us understand what
tools might already be available to accomplish certain portions of our task or how we might design
certain things to transfer between different tasks and different contexts. Hierarchical task analysis also
lets the designers of the site abstract over this part of the process and focus more on what might make
their particular site unique. This type of task analysis is so common that you generally will find tasks and subtasks whenever you’re looking at the results of a cognitive task analysis. So it’s important to
remember the strengths supplied by this hierarchy: abstracting out unnecessary details for a certain
level or abstraction, modularizing designs or principles so they can be transferred between different
tasks or different contexts, and organizing cognitive task analysis in a way that makes it easier to
understand and reason over. Lastly, it’s also important to note that the cognitive and hierarchical task
analyses we’ve shown here are extremely simplistic, mostly, honestly, because of limited screen real
estate. When you’re creating real cognitive models, you’ll likely have several levels of abstractions,
several different states, and additional annotating information like what the user has to keep in mind or
how they might be feeling at a certain stage of the analysis. We’ll put some examples of some good,
thorough models in the notes.
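Here's a minimal Python sketch of that online-shopping hierarchy. The specific task names and structure are illustrative assumptions, not a real site's checkout flow; the point is that a well-defined subtask like checkout can be treated as a module of its own:

```python
# A hypothetical, highly simplified hierarchical task analysis of buying
# something online. Task names and structure are illustrative assumptions.
checkout_hta = {
    "buy item online": [
        {"find item": ["search or browse", "compare alternatives"]},
        {"add item to cart": []},
        # "check out" is the well-defined subtask shared across vendors, so it
        # can be modularized or handed off to an existing payment widget.
        {"check out": ["enter shipping address",
                       "enter payment details",
                       "confirm order"]},
    ]
}

def print_tasks(node, depth=0):
    """Walk the hierarchy, printing each task indented under its parent."""
    if isinstance(node, str):
        print("  " * depth + node)
    elif isinstance(node, dict):
        for name, subtasks in node.items():
            print("  " * depth + name)
            for sub in subtasks:
                print_tasks(sub, depth + 1)

print_tasks(checkout_hta)
```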
Let’s watch the videos of Morgan disabling her security system again. This time, though, let’s try to
approach this from a more cognitive task analysis perspective. We won’t be able to do that fully
because doing a full cognitive task analysis means interviewing, asking the user to think out loud, and
more, but we can at least try out this approach. Remember, in doing a cognitive task analysis for a task
like this, your goal is to build a model of the sequence of thoughts going on inside the user's head. Pay special
attention to what she needs to remember at each step of the process.
What we saw here was that to get inside and disable the alarm, there was a sequence of actions that
had to be completed, but some of them could be completed in different orders. If she used the keypad,
she had to first unlock the door and then open the door. Then she could either disable the alarm on the
keypad or close the door. And after closing the door, she could re-lock the door, though she could also
do that before disarming the alarm. So there's some choices there. With the keychain, the sequence of
tasks related to the door remain the same, but she had the option of disarming the alarm before even
entering. However, that required remembering to do so. When using the keypad, she didn't have to
remember because the alarm beeps at her until she turns it off. But she has to remember the key code.
Performing these cognitive task analyses gives us the information necessary to evaluate different
approaches and look for areas of improvement. For example, if she can disable the alarm just by
pressing the keychain button, why does she need to press it at all? Why doesn't it just detect that she's
coming in with a keychain in her pocket?
Just like GOMS models, cognitive task analyses also have some strengths and some weaknesses. One
strength is that they emphasize mental processes. Unlike the GOMS model, cognitive task analysis puts
an emphasis on what goes on inside the users head. It's thus much better equipped to understand how
experts think and work.
The information it generates is also formal enough to be used for interface design, for comparison among alternatives, and more. There are disadvantages, though.
One weakness is that cognitive task analysis risks deemphasizing context. In zooming in on the individual's own thought processes, cognitive task analysis risks deemphasizing details that are out in the world, like the role of physical capabilities or interactions amongst different people or different artifacts.
GOMS and cognitive tasks analysis are just two of the many alternatives to understanding how users
approach tasks. More in line with the human information processor models, there exist models like
KLM, TLM, and MHP, which capture even finer-grained actions for estimating performance speed. There
are other extensions to GOMS as well that add things like sub goals, or other ways of expressing
content like CPM-GOMS and NGOMSL. CPM-GOMS focuses on parallel tasks, while NGOMSL provides a
natural language interface for interacting with GOMS models. More on the lines of cognitive models,
there exists other methods as well like CDM, TKS, CFM, Applied Cognitive Task Analyses, and Skill-Based
Cognitive Task Analyses. CDM puts a focus on places where critical decisions occur. TKS focuses on the
nature of humans' knowledge. CFM focuses on complexity. ACTA and Skill-Based CTA are two ways of
gathering the information necessary to create a cognitive model. There also exists other frameworks
more common in other disciplines. For example, production systems are common in artificial intelligence, but they're intended to model cognitive systems the same way these cognitive models do.
So we can apply production systems here as well and attempt to prescribe rules for users to follow.
Every possible application of HCI involves users completing some sort of task. That task might be
something within a domain. In educational technology, for example, that task might be learning how to
do a certain kind of problem. If your area is more technological, the task might be something the user is
doing through your application, like using virtual reality and gesture recognition to sculpt a virtual
statue. Take a moment and try to think of the kinds of tasks you might be interested in exploring in
your chosen application area. Do they lend themselves more to an information processor model like
GOMS? Or to cognitive models like hierarchical task analysis? And how can you tell?
Today we've talked at length about two general methods for approaching task analysis.
One, the GOMS family of approaches tries to distill tasks down to their goals, operators, methods and selection rules. Two, cognitive task analysis tries to get inside the user's head, emphasizing things like memory, attention, and cognitive load.
[MUSIC] In discussing a human-computer interaction, there's often a tendency to look narrowly at the
user interacting with the computer. Or slightly more broadly at the user interacting with the task
through some computer. But many times we're interested in zooming even further out. We're
interested, not only in the interaction between the human, the computer and the task, but also in the
context in which that interaction takes place. So today we're going to look at four different models or theories of the context surrounding HCI.
The first of these is distributed cognition, which we'll spend the most time on. We'll also touch on three other significant theories: social cognition, situated action, and activity theory.
Cognition on its own is interested in thought processes and experiences and we naturally think of those
as occurring inside the mind. But distributed cognition suggests that models of the mind should be
extended outside the mind. This theory proposes expanding the unit we use to analyze intelligence
from a single mind to a mind equipped with other minds and artifacts, and the relationships among them.
Okay can I do that in my head? No, I honestly can't even remember the numbers you just read to me.
But I have a pen and paper here, and using those, I can pretty easily write down the numbers. So give
those numbers to me again.
Okay, and using that I can now do the calculation by hand, and the answer is 7675.
Now, did I get smarter when I grabbed the pen and paper? Not really, not by the usual definition of
“smarter” at least. But the system comprised of myself, the pen, the paper is a lot more than just my
mind on its own. The cognition was distributed amongst these artifacts. Specifically, the paper took
care of remembering the numbers for me and tracking my progress. So, instead of adding 1238 plus 6437, I was really just adding 8+7, 3+3, 2+4, and so on.
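Here's a minimal Python sketch of that division of labor; the function is my own illustration, not anything from the lesson. The paper plays the role of the stored digits and the partial-result column, and the only mental operation left is adding two digits and a carry:

```python
# A sketch of column-by-column addition: the paper "remembers" the digits
# and the partial result, so the person only ever adds two digits and a carry.
def add_on_paper(a, b):
    da, db = str(a)[::-1], str(b)[::-1]     # the paper holds both numbers
    carry, written = 0, []
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        carry, digit = divmod(x + y + carry, 10)   # the only mental step
        written.append(str(digit))               # the paper holds the result
    if carry:
        written.append(str(carry))
    return int("".join(reversed(written)))

print(add_on_paper(1238, 6437))   # 7675, just as in the example above
```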
One of the seminal works in distributed cognition research is a paper in the Journal of Cognitive Science
from 1995 called “How a Cockpit Remembers its Speeds” by Edwin Hutchins. You might recognize
Edwin Hutchins' name from our lesson on direct manipulation and invisible interfaces. He was one of
the coauthors there along with Don Norman. It’s one of my favorite papers, in part simply because of
the very subtle change in emphasis in its title. We tend to think of ‘remembering’ as a uniquely human
or biological behavior. We describe computers as having memory, but we don’t usually describe
computers as remembering things. Remembering is more of a human behavior. But the paper title
twists that a little bit. It isn't the human, it isn't the pilot remembering, it's the cockpit remembering.
What is the cockpit? The cockpit is a collection of controls, sensors, and interfaces, as well as the pilots
themselves. The paper title tells us that it's this entire system: the pilots, the sensors, the controls, and
the interfaces among them that do the remembering. The system as a whole, the cockpit as a whole is
remembering the speed, not just the human pilot or pilots inside the cockpit. No individual part in
isolation remembers what the system as a whole can remember.
In order to understand the application of distributed cognition to the cockpit, it's important for us to
first understand what challenge it's addressing. The technical reasons for this are a bit complex, and I
strongly encourage reading the full paper to get the full picture. But to understand the simplified
description I'll give here, here's what you need to know.
When a plane is descending for landing, there exists a number of different changes the pilots need to
make to the wing configurations. These changes are made at different speeds during the descent.
When the plane slows down to a certain speed, it demands a certain change to the wing configuration.
The speeds at which these configuration changes must happen differ based on a number of different
variables. So for every flight there's a unique set of speeds that must be remembered. That's why the title of this paper is How a Cockpit Remembers Its Speeds -- speeds, plural. It isn't just remembering
how fast it's going now, it's remembering a sequence of speeds at which multiple changes must be
made. The configuration changes to the wings must be made during the descent at narrowly defined
times. That creates a high cognitive load. Pilots must act quickly. And mistakes could mean the deaths
of themselves and hundreds of others. So how do they do this?
So we know that the cockpit has its pilots, who are responsible for actually reasoning over things. But the cockpit also has a booklet of speed cards, and that booklet forms the cockpit's long-term memory of different speeds for different parameters. Then, prior to the descent, the pilots find the page from that booklet that corresponds to their current
parameters. They pull it out and pin it up inside the cockpit. That way, the sheet is accessible to both
pilots. And they're able to check one another's actions throughout. This becomes one form of the
cockpit's short-term memory, a temporary representation of the current speeds. At this point, we have
to attribute knowledge of those speeds to the cockpit itself. If we were to isolate either pilot, they
would be unable to say what the speeds are from memory, but without the pilots to interpret those
speeds, the card itself is meaningless. So it's the system of the entire cockpit, including the pilots, the
booklet and the current card that remembers the speeds.
The speed bugs on the speedometer, meanwhile, are like the working memory for the cockpit. The short-term memory stores the numbers in a way that the pilots can reason over, but the speed bugs store them in a way that they
can very quickly just visually compare. They don't need to remember the numbers themselves, or do
any math. All they have to do is visually compare the speed bugs to the current position of the
speedometer. So what do we see from the system as a whole? Well, we see the long term memory in
the book of cards. We see a short term memory in the card they selected. We see a working memory in
the speed bugs on the speedometer. And we see decisions on when to make configuration changes
distributed across the pilots and these artifacts. No single part of this cockpit, not the pilots, not the
speed bugs, not the cards, could perform the action necessary to land a plane on their own. It's only
the system as a whole that does so. That's the essence of distributed cognition. The cognition involved in landing this plane is distributed across the components of the system.
Distributed cognition is deeply related to the idea of cognitive load. Recall that cognitive load refers to your mind's ability to only deal with a certain amount of information at a time. Distributed cognition suggests that artifacts add additional cognitive resources. That means the same cognitive load is distributed across a greater number of resources. Artifacts are like plugging extra memory into your brain. Driving is a good example of this. Sometimes while driving, your cognitive load can be very, very high. You have to keep track of the other cars around you. You have to keep track of your own speed and monitor your route planning. You have to make predictions about traffic patterns. You have to pay attention to your own level of gasoline, or in my case, electric charge. You might be attending to something in your car as well, like talking to your passenger, or keeping an eye on your child in the back seat. It can be a big challenge. A GPS is a way of off-loading one of those tasks, navigation, onto another system. And thus, your cognition is now distributed between you and the GPS. Turn on cruise control and now it's distributed across the car as well. You're off-loading the task of tracking your speed to the car. Every task you off-load onto artifacts decreases your own personal cognitive load.
Let's analyze a simple task from the perspective of distributed cognition. Here we see Morgan paying
some bills the old fashioned way. For each bill she pulls it off the pile, reads it, writes a check and puts
them together in a stack on the right. Where do we delineate this system? What are its parts?
We're interested in any part of the system that performs some of the cognition for Morgan. While the
chair, table, and overhead light make this possible, they aren't serving any cognitive roles. Morgan herself, of course, is, and the two piles of bills are too. They are an external memory of which bills Morgan
has already paid, and what she still needs to pay. This way she doesn't have to mentally keep track of
what bills she has left to do. The bills themselves remember a lot of the information for her as well like
the amounts and the destinations they need to be sent to. What about the pen and checkbook? That's
when things start to get a little bit more tricky. The checkbook itself is part of the system because it
takes care of the record keeping task for Morgan. Checkbooks create carbon copies, which means
Morgan doesn't have to think about tracking the checks manually. The pen is a means of
communicating between these systems, which means it's part of our distributed cognition system as
well.
Something important to note is that distributed cognition isn't really another design principle.
Distributed cognition is more of a way of looking at interface design. It's a way of approaching the
problem that puts your attention squarely on how to extend the mind across artifacts. We can actually
view many of our design principles as examples of distributed cognition. So this is my computer, and
when I set this up, I wasn't thinking about it in terms of distributed cognition. And yet we can use distributed cognition as a lens through which to view this design.
For example, I always have my calendar open on the right. That's a way of off loading having to keep
track of my daily schedule in working memory. It bugs me if I have a teleconference to attend or
somewhere I need to go. In fact I rely on this so much it gets me in trouble. It doesn't keep track of
where I need to be for a given meeting and if I fail to keep track of that in working memory I might end
up at home when I need to be at Georgia Tech. We can even view trivial things like a clock as an example of distributed cognition that prevents me from having to keep track of the passage of time myself.
Distributed cognition is a fun one to reflect on because we can take it to some pretty silly extremes. We
can go so far as to say that I don't heat up my dinner. The system comprised of myself and the microwave heats it up. And I off-load the need to track the time to cook onto my microwave's timer. And that's a perfectly valid way of looking at things. But what we're interested in is places where interfaces don't just make our lives more convenient. We're interested in places where the systems comprised of us and interfaces are capable of doing more, specifically because those interfaces exhibit certain cognitive qualities. The systems might perceive, they might remember, they might learn, they might act on our behalf. In some way, they're all off-loading a cognitive task from us. And as a result, the system comprised of us and the interface is capable of doing more. So reflect on that a bit: what is a place where the system comprised of you and some number of interfaces is capable of doing more than you alone, specifically because of the cognitive qualities that the interfaces possess?
Almost any interface on the computer can be analyzed from the perspective of distributed cognition but
right now, I'm most interested in my email. My email is an unbelievable extension of my long term
memory because whenever I see anything in email, I know I don't actually need to commit it to my own
long-term memory. It's there, it's safe forever, and if I ever need to find it again, I'll be able to find it.
Now, finding it might take some work sometimes, but rarely as much work as manually remembering it.
For me, I also mark messages as unread if I'm the one they're waiting on, or if I need to make sure I
come back to them. And so, my email is an external memory of both all my communications via email,
and tasks that are waiting on me to move forward.
Distributed cognition is concerned with how the mind can be extended by relations with other artifacts
and other individuals.
Because we're interface designers, we probably focus most of our time on the artifacts part of that.
After all, even though we're designing tasks, the artifacts are what we're actually creating that's out in
the world.
Before the days of GPS navigation, a different form of navigation assistance existed. It was your spouse
or your friend sitting in the passenger seat, reading a map and calling out directions to you.
At Udacity, we use a tool for managing projects called JIRA. It breaks down projects into multiple pieces
that can be moved through a series of steps and assigned to different responsible individuals. The
entire value of JIRA is that it manages distributing tasks across members of a team. Thus, when a
project is completed, it is completed by the system comprising the individual team members and JIRA
itself.
The social portion of distributed cognition is concerned with how social connections create systems that
can, together, accomplish tasks. So for example, you and your friend sitting in the passenger seat,
together form a system capable of navigating to a new destination without a GPS. But social cognition
is not only concerned with how social relationships combine to accomplish tasks. It's also concerned
with the cognitive underpinnings of social interactions themselves. It's interested in how perception,
memory, and learning relate to social phenomena. As interface designers, though, why do we care?
Well, in case you haven't noticed, one of the most common applications of interface design today
involves social media. Everything is becoming social. Facebook tries to tell you when your friends are
already nearby. Udacity tries to connect you with other students working on the same material as you.
Video games are increasingly trying to convince you to share your achievements and highlights with
your friends. And yet, oftentimes, our interfaces are at odds with how we really think about social interaction. Designing for this well involves understanding the cognitive underpinnings of social
relationships. My PlayStation, for example, has a feature for finding my real life friends, and then
communicating to them my gaming habits. But really, I probably don't want them to know how much I
might play video games. If I come unprepared for recording today, I certainly don't want Amanda to
know it was because I was playing Skyrim for six hours last night. So if we're going to design interfaces
that integrate with social interactions, we have to understand how social interactions actually work. So
an understanding of social cognition is very important if that's the direction you want to take.
Let's talk about the challenge of designing for social relationships. I like to play video games. I'm friends
with people from work. So it's natural that I might want to play games with people from work. But at
the same time, my relationship with people from work isn't purely social. If they see I'm playing a
game, maybe they say, hey, David's got some free time. I should ask him to help me out with
something. Or if they see I spend a lot of time playing video games, maybe they more generally say, hey, David's got plenty of time to take on new tasks. How do we design a social video gaming system
that nonetheless protects against these kinds of perceptions?
There are a lot of creative ways we might tackle this problem. One might be to base the social video game relationship around something like Tinder. Tinder, if it's still around by the time you're watching this, is a dating app where you express interest in others and are only connected if they also express interest in you. We can apply the same philosophy to video games. You can set it up such that my contacts can't just look up my game-playing habits. But if they're also playing or interested in playing, they'll learn that I am playing as well. In terms of social cognition, that's kind of getting at the idea of an
in-group. Your behaviors are only seen by those who share them and thus are in no position to judge
them.
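Here is a minimal sketch of that mutual-interest rule in Python. Everything below, the data and the function, is my own invention for illustration; it's not a real PlayStation or Tinder API, just the in-group logic described above.

    # Each person's set of games they play or have expressed interest in.
    interest = {
        "David": {"Skyrim", "Rocket League"},
        "Amanda": {"Portal"},
        "Jamal": {"Skyrim"},
    }

    def can_see_activity(viewer, player, game):
        """The viewer sees the player's activity in a game only if the
        viewer also plays, or is interested in, that game themselves."""
        return (game in interest.get(viewer, set())
                and game in interest.get(player, set()))

    print(can_see_activity("Jamal", "David", "Skyrim"))   # True: shared in-group
    print(can_see_activity("Amanda", "David", "Skyrim"))  # False: hidden from her

Under this rule, my gaming habits are visible only to contacts who share them, and who are thus in no position to judge them.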
Like distributed cognition, situated action is strongly concerned with the context within which people
interact. But unlike distributed cognition, situated action is not interested in the long-term and
enduring or permanent interactions amongst these things. That’s not to say the theory denies the
existence of long-term memory, but it just has a different focus. Situated action focuses on humans as
improvisers. It’s interested not in the kinds of problems that people have solved before, but in the kinds
of novel, situational problems that arise all the time. So, for an example of this, this is the first time I’m
filming with my daughter on camera. I don’t know how she’ll act. I don’t have contingency plans set up
for how to react if she acts strangely or if she distracts me from my script. I’m just going to figure this
out as I go along. This is the kind of interaction that situated action is interested in. And that’s an
important view for us to hold as interface designers. While we like to think we're in charge of structuring the task for our users, in reality the tasks that users perform are going to grow out of their interactions. We can try our best to guide it in certain directions, but until we actually get our hands on it, the task doesn't exist. The task of me filming with my daughter didn't exist until this moment. Once we've got our hands on it, the task is what we do, not what we designed. So when users use an interface, when they actually do something, they're defining the task as they go along. So, there are three
takeaways here.
Two, we must understand that the task that users perform grows out of their interaction with our
interfaces -- we don’t define it.
Situated action gives us a valuable lens to examine issues of memory. We mentioned in our lessons on memory and on design principles that recognition is easier than recall. People have an easier time recognizing the right answer or option when they see it, rather than recalling it from scratch. That's in part because memory is so context-dependent. Recognition provides the necessary context to identify the right option. Relying on recall means there's little context to cue the right answer in the user's memory. Now, I encountered an interesting example of the value of situated action a little while ago.
My mother just had surgery, and so I would often go over to help her out with things. And every time I would go over, she'd have four or five favors to ask me. Inevitably I would forget a couple of those favors
and have to be reminded, but she would always remember. Why was she so much better able to
remember the favors than me? Does she just have a better memory? She didn't make a list. She didn't
write them down or anything like that. So the distributed cognition perspective doesn't find an external
memory being used or anything like that. My hypothesis from the perspective of situated action, is
that she has the context behind the tasks. She knows why they need to be done. She knows what will
happen if they aren't. For her, they're part of a broader narrative. For me, they're items on a list. I have
no context for why they're there. Or what would happen if they're undone. For her, they're easy to
remember because they're situated in a larger context. For me, they're difficult because they're
isolated.
Lucy Suchman's 1985 book, Plans and Situated Actions, is the seminal book on the philosophy of situated action. The book is a detailed comparison between two views of human action.
The first view, she writes, views the organization and significance of action as derived from plans. And
this is a model we very often adopt when developing interfaces. Users make plans and users carry out
those plans, but Suchman introduces a second view as well.
Later in the book, Lucy Suchman specifically touches on communication between humans and machines. There's a lot more depth here as well, but the key takeaway for us is to focus on the resources available to the user at any given time. I do recommend reading the book, and this chapter in particular, for more insights.
Activity theory is a massive and well developed set of theories regarding interaction between various
pieces of an activity. The theory as a whole is so complex that you could teach an entire class on it
alone. It predates HCI. And in fact, activity theory is one of the first places the idea of interacting
through an interface actually came from. In our conversations about HCI though, there are three main
contributions of activity theory that I'd like you to come away with. First, when we discuss designing
tasks and completing tasks through an interface, we risk missing a key component. Why? We could
jump straight to designing the task, but why is the user completing the task in the first place? That can
have significant implications for our design.
Activity theory generalizes our unit of analysis from the task to the activity. We're not just interested in
what they're doing, but why they're doing it and what it means to them. Our designs will be different,
for example, if users are using a system because they're required to or because they choose to. Notice
how this is similar to our discussion of distributed cognition, as well. In distributed cognition, we were
generalizing the unit of analysis from a person, to a system of people and artifacts. Here, we're
generalizing the unit of analysis from a task to an activity surrounding a task. In both ways, we're
zooming out on the task and the design space.
In 1996, Bonnie Nardi edited a prominent book on the study of context in human-computer interaction,
titled Context and Consciousness. The entire book is worth reading, but two papers in particular stand
out to me, both by Nardi herself. The first is a short paper that serves in some ways as an introduction
to the book as a whole. It's not a long paper, only four pages, so I highly recommend reading it. It
won't take you long. Here, Nardi outlines the general application of activity theory to HCI.
She notes that activity theory offers a set of perspectives on human activity and a set of concepts for
describing that activity. And this is exactly what HCI research needs as we struggle to understand and
describe context, situation and practice. She particularly notes that the theory is uniquely suited to
addressing some of the interesting issues facing HCI in 1996. And for that reason, it's also fascinating
to view from a historical perspective. Today we understand the role that context has grown to play,
especially with emerging technologies. It's fascinating to me to look back at how the community was
constructing that debate 20 years ago.
In this lesson we covered three theories on studying context in human computer interaction.
Distributed cognition, situated action, and activity theory. If you're having trouble keeping the three straight, though, Nardi has a great paper for you. From her volume Context and Consciousness, Nardi wrote a comparison between the three philosophies, titled Studying Context: A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition.
She starts by giving a great one page summary of each of these three views, which would be really good
if you're having trouble understanding the finer points of these theories.
First, she notes that activity theory and distributed cognition are driven by goals, whereas situated action de-emphasizes goals in favor of a focus on improvisation.
She goes on to summarize that situated action says goals are constructed retroactively to interpret our past actions.
So what makes them different? Well, Nardi writes that the main difference between activity theory and
distributed cognition is their evaluation of the symmetry between people and artifacts.
Activity theory regards these as fundamentally different, given that humans have consciousness, while distributed cognition treats people and artifacts as conceptually equivalent parts of a single system.
Distributed cognition is a perspective on analyzing systems that helps us emphasize the cognitive
components of interfaces themselves. It helps us look at things we design as extensions of the user's
own cognition. We can view anything from notes on a desktop to the entire Internet as an extension of
the user's own memory. We can view things like Gmail's automatic email filtering as off-loading
cognitive tasks from the user. In looking at things through this lens, we focus not just on the output of people or of interfaces, but on the combination of people and interfaces together. So, what are they able to do together that neither of them could do individually? As we close this lesson, think about this in terms of your chosen areas of HCI. What are the cognitive components of the areas with which you're dealing? How do augmented reality and wearable devices off-load some of the user's cognition onto the interface? In educational technology, or in HCI for health care, what are the tasks being accomplished by the systems comprised of users and interfaces?
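A few lines up, we viewed Gmail's automatic email filtering as off-loading a cognitive task. As a toy illustration of that idea, here is a minimal rule-based filter in Python. The rules and labels are invented for this example; this is not how Gmail actually works, just the shape of the off-load: the interface performs a classification the user would otherwise have to do.

    # Hypothetical filtering rules: each pairs a test with a label.
    RULES = [
        (lambda m: "unsubscribe" in m["body"].lower(), "Promotions"),
        (lambda m: m["sender"].endswith("@gatech.edu"), "Work"),
        (lambda m: "invoice" in m["subject"].lower(), "Finance"),
    ]

    def classify(message):
        """Return the first matching label; otherwise leave it in the inbox."""
        for test, label in RULES:
            if test(message):
                return label
        return "Inbox"

    msg = {"sender": "billing@example.com", "subject": "Your invoice", "body": "..."}
    print(classify(msg))  # -> Finance: one less judgment the user has to make

Each message the filter sorts is a small decision removed from the user's cognitive load, which is exactly what we mean by the system doing some of the thinking.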
In this lesson, we've talked about distributed cognition and a couple related theories. The commonality
of all these theories was their emphasis on context in integrated systems. Distributed cognition is
interested in how cognition can be distributed among multiple individuals and artifacts, all working
together. By taking a distributed view of interface design, we can think about what the combination of
our users and our interfaces are able to accomplish.
Situated action and activity theory give additional perspectives on this, focusing respectively on the importance of ad hoc improvisation and the need to emphasize users' motives beyond just their goals. The common ground for all these theories is that our interfaces are not simply between the user and their task; they also exist in some kind of greater context.
[MUSIC] In 1980, Langdon Winner published a highly influential essay in which he asked, do artifacts have politics? In other words, do technical devices have political qualities? And the answer is yes. All toasters are Democrats.
So in this lesson, we're going to talk about two dimensions of this, designing for change and
anticipating the change from our designs.
Most commonly in HCI, we're interested in designing for usability. We want to make tasks easier through technology. So in a car, we might be interested in designing a GPS that can be used with the fewest number of taps, or a dashboard that surfaces the most important information at the right time.
Sometimes we're also interested in designing for research, though. We might design a dashboard that
includes some kind of visualization of the speed to see if that changes the way the person perceives
how fast that they're going. But a third motivation is to somehow change the user's behavior.
Designing for change in response to some value that we have. Oftentimes that may actually conflict with those other motivations. If we're trying to discourage an unhealthy habit, we might want to make the interface for that habit less usable. Cars actually have a lot of interfaces created with that motivation in mind. If I start driving without a seatbelt on, my car will beep at me. Some cars will cap your speed at a certain number. Those interfaces serve no usability goals, but rather they serve the goal
of user safety. Now, that's a simplistic example, but it shows what I call the three goals of HCI. Help a
user do a task, understand how a user does a task, or change the way a user does a task due to some
value that we hold, like safety or privacy.
The most influential paper on the interplay between artifacts and politics, came from Langdon Winner
in 1980. The paper describes numerous ways in which technologies, interfaces, and other artifacts demonstrate political qualities and political motivations.
For example, he opens by noting the belief that nuclear power can only be used in a totalitarian society
because of the inherent danger of the technology.
One type he describes is inherently political technologies. These are technologies that, due to their very design, are only compatible with certain political structures. Certain technologies, like nuclear power, whether due to complexity, safety, or resources, require considerable top-down organization. Those lend themselves to authoritarian power structures. Others, like solar power, someone might argue, are only
possible in a more distributed and egalitarian society. So these technologies, by their very nature,
dictate the need for certain political structures.
Let's start with the bad news. The ability of interfaces to change behavior can be abused. We're not
just talking about places where people put explicit barriers up like blocking certain people from
accessing their content. There are instances where people create seemingly normal designs with
underlying political motivations. Winner describes one such instance in his essay Do Artifacts Have
Politics? Robert Moses was an influential city planner working in New York City, in the early 1900s. As
part of his role, he oversaw the construction of many beautiful parks on Long Island. He also oversaw
the construction of parkways, roads to bring the people of New York to these parks. That's actually
where the word parkway comes from. But something unfortunate happened.
The bridges along these parkways were too low for buses to pass under them. As a result, public
transportation couldn't really run easily to his parks. And as a result of that, only people wealthy
enough to own cars were able to visit his parks. What an unfortunate coincidence, right? The evidence
shows it's anything but coincidence. Moses intentionally constructed those bridges to be too low for buses to pass under as a way of keeping poor people from visiting his parks. His political motivations directly informed the design of the infrastructure, and the design of the infrastructure had profound social implications. This is an example of Winner's technology as a form of social order. The bridges
could have been taller. There's nothing inherently political about those bridges. It was the way that
they were used that accomplished this political motivation. As an interesting aside, I learned recently
that the design of Central Park inside New York City was an example of the exact opposite dynamic.
The designers were encouraged to put in places that only carriages could access, so affluent people would have somewhere to go away from poor people. But the designers specifically made the entire
park accessible to everyone. It's not too hard to imagine things kind of like that happening today either.
One of the arguments from proponents of net neutrality is that without it, companies can set up fast lanes that prioritize their own content or, worse, severely diminish the content of their competitors or content critical of the company.
We can design for positive social change as well, though. This goes beyond just encouraging people to be nice or banning bad behavior. Interfaces can be designed that lead to positive social change through natural interaction with the system. One example of this that I like is Facebook's ubiquitous Like button. For years, many people have argued for a Dislike button to complement the Like button. Facebook has stuck with the Like button, though, because by its design, it only supports positive interactions. It dodges cyberbullying, it dodges negativity. For usability purposes, it's a weakness, because there are interactions I can't have naturally in this interface. But this specific part of the Like button wasn't designed with usability in mind.
More recently, Facebook has added to the Like button with five new emotions: love, haha, wow, sad, and angry. Even with these five new emotions, though, the overall connotation is still positive. For three of them, it's obvious why: love, haha, and wow are more positive emotions. Sad and angry are reactions to the content being shared, expressions of sympathy or shared frustration, rather than negativity directed at the poster.
This also doesn't have to be strictly about dictating change, but it can also be about supporting change.
For example, until a few years ago, Facebook had a more limited set of relationship options. They had
married, engaged, in a relationship, single and it's complicated. As its target audience went from being
college students to everyone, they also added separated, divorced and widowed. But it was a couple of
years after that that they then added in a civil union and in a domestic partnership. Adding these
concepts didn't magically create these social constructs, they existed legally before Facebook added
them here. But adding them here supported an ongoing societal trend and gave them some validity.
And made people for whom these were the accurate relationship labels feel like they really were part of
Facebook's target audience, they were part of modern culture. That an accurate representation of their
relationship status was available on this drop down meant they could accurately portray who they were
on their Facebook profile.
Let's tackle Change by Design by designing something for Morgan. So Morgan has a desk job. That
means she spends a lot of her time sitting. However, for health reasons, it's best for her to get up once
per hour and walk around just for a few minutes. There are a lot of ways we could tackle this, by physically changing the design of her environment to a standing desk, or by giving her an app that directly reminds her or rewards her for moving around. But let's try to do something a little bit more subtle. Let's design something for Morgan's smartphone that gets her to move around for a couple of minutes every hour without directly reminding her to walk around or rewarding her for doing so.
So here's one idea. Imagine a weather tracking app that crowdsources weather monitoring. Every hour, participants are buzzed to go outside and let their phone take some temperature readings, and maybe take a picture of the sky. That design has nothing at all to do with moving around, but movement is a side effect of it. Participation in this seemingly unrelated activity has the benefit of getting people moving.
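Mechanically, the hourly nudge could be as simple as a recurring timer. Here's a sketch under invented assumptions: the app name "WeatherShare" and its message are made up, and a real phone app would use the platform's notification scheduler rather than a blocking loop like this one.

    import sched
    import time

    scheduler = sched.scheduler(time.time, time.sleep)

    def request_reading():
        # Stand-in for a phone notification: the app asks for weather
        # data; walking outside is only the side effect we care about.
        print("WeatherShare: step outside and capture a sky photo!")
        scheduler.enter(3600, 1, request_reading)  # re-arm for the next hour

    scheduler.enter(3600, 1, request_reading)  # first buzz, one hour from now
    scheduler.run()

Notice that nothing in the code mentions exercise at all; the behavior change rides along on a task with a completely different stated purpose.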
Pokemon GO is a great example of this in a different context. It doesn't spark the same kind of
intermittent exercise but it gets people to exercise more generally, all without ever actually telling them
to do so.
Positive change doesn't always have to happen by design, though. In fact, there are numerous examples of positive change happening more as a by-product of technological advancement rather than as a goal of it. In Bijker's Of Bicycles, Bakelites and Bulbs, this is the bicycle example. The story looks at what women could do before and after the invention of the bicycle. Before the bicycle, women tended to be pretty reliant on men for transportation. People generally got around with carriages, which were pretty expensive, so only wealthy people would own them. And so, typically, men would own them. So if a
woman wanted to go to a restaurant or go to a show, she typically had to go either with her spouse or
with her father. As a result, society rarely saw women acting individually. They were usually in the
company of whoever the prominent male in their life was at the time. But then the bicycle came along.
The bicycle was affordable enough and targeted at individuals, so now women could get around on
their own. So now a woman could go to a show or go to a restaurant by herself, instead of relying on a
man to take her. In the book though, what Bijker covers is not just the fact that this enabled more
individual transportation, but rather that this enabled a profound social shift. This technological
innovation allowed women to start acting independently. And it also demanded a wardrobe change,
interestingly enough, because you couldn't wear a dress on a bicycle. So the invention of the bicycle simultaneously changed women's attire and changed the level of independence they could show in modern society. And both these changes forced society to challenge traditional gender roles. The bicycle's role in women's liberation was so significant that Susan B. Anthony actually once said, I think bicycling has done more to emancipate women than anything else in the world. But when the bicycle
was invented, it's doubtful that the inventor sat down and said surely this will be a great way to
emancipate women and change our culture's gender roles. That's not what they had in mind. They
were inventing a technological device. But as an unintended positive side effect, that technological
device profoundly changed society.
Just as we can create positive changes by accident, if we aren't careful, we can also inadvertently create
negative changes as well, or further preserve existing negative dynamics. A good example of this is the
proliferation of the Internet in the first place. When the Internet first came along, it piggybacked on
existing phone lines. Then it started piggybacking on more expensive cable TV lines. And now it's
following along with very expensive fiber optic lines. At every stage of the process, areas with more well-developed infrastructure get the latest Internet speeds first. However, the areas with well-developed infrastructure are generally the wealthier areas in the first place, either because wealthier citizens paid for the improved infrastructure, or because people with the means to move wherever they want will move somewhere with better infrastructure.
High speed internet access is a big economic boon. And yet areas that are already economically
advantaged are generally the first ones to get higher speed internet access. Even today, in poorer parts of the United States, the only available Internet connections are slow, unreliable satellite connections with strict data caps. And in the rest of the world this issue can be even more profound, where many people still have no reliable Internet access at all.
In HCI, we describe the idea of interfaces becoming invisible. Some of that is a usability principle, but it
also applies more broadly to the way that interfaces integrate themselves into our everyday lives. And
if our interfaces are going to integrate into people's lives, then they need to share the same values as
those individuals as well. This connects to the field of value sensitive design. The Value Sensitive Design
Lab at the University of Washington defines this idea by saying, value sensitive design seeks to provide
theory and method to account for human values in a principled and systematic manner throughout the
design process. In this way, value sensitive design is another dimension to consider when designing
interfaces. Not only is an interface useful in accomplishing a task and not only is it usable by the user,
but is it consistent with their values?
One of the most well-developed application areas of value sensitive design is privacy by design. Privacy is a value, and privacy by design has aimed to preserve that value in the design of systems. It's possible to build privacy into a system from the very beginning, rather than bolting it on after the fact.
Batya Friedman is one of the co-directors of the Value Sensitive Design Research Lab at the University of
Washington. And she co-authored one of the seminal papers on the topic, Value Sensitive Design and
Information Systems. Friedman, Kahn, and Borning together provide this excellent paper on the
philosophy. In it, they cover three investigations for approaching Value Sensitive Design.
First, they cover conceptual investigations. Conceptual investigations are like thought experiments where we explore the role values play in questions like, who are the direct and indirect stakeholders? And how are both classes of stakeholders affected? Second, they cover empirical investigations, which use quantitative and qualitative methods to study how stakeholders actually understand and prioritize values in practice.
Third, they cover technical investigations. Technical investigations are like empirical investigations that
target the systems instead of the users. They ask the same kind of questions, but they are especially
targeting whether or not the systems are compatible with the values of the users.
And value sensitive design distinguishes between usability and human values. If you're planning to work
in an area where human values play a significant role, and I would argue that that's probably most areas
of HCI, I highly recommend reading through this paper. It can have a profound impact, not only on the
way you design interfaces, but on the way you approach user research.
One of the challenges with value sensitive design is that values can differ across cultures. The internet
makes it technologically possible to design single interfaces that are used by people in nearly every
country, but just because it's technologically possible doesn't mean it's practically possible. And one
reason for that is that different countries and cultures may have vastly different values. A relatively recent newsworthy example of this occurred with the right to be forgotten. The right to be forgotten is a law in the European Union that allows individuals some control over what information is available about them online. That's a value held by the European Union. However, technologies like Google were not
generally developed with that value in mind. So there's actually been an extraordinary effort to try to
technologically support that right to be forgotten, while still providing search capabilities. Making this
even more complicated is the fact that the value isn't universally shared. Many people argue that the
law could actually effectively become internet censorship. So now we start to see some conflict in the values between different cultures. One culture's value of privacy might run afoul of another culture's value of free speech. If we're to design interfaces that can reach multiple cultures, we need to
understand the values of those cultures. Especially if it might force us to design different systems for
different people in order to match their local values.
Here are five tips for incorporating value sensitive design into your interfaces. Number 1, start early.
Identify the values you want to account for early in the design process, and check in on them throughout the design process. The nature of value sensitive design is that it might have significant connections, not just to the design of the interface, but to the very core of the task you're trying to support. Number 2, know your users. I know I say this a lot, but in order to design with values in mind, you need to know your users' values. Certain values are incompatible with one another, or at least present challenges for one another. Privacy, as a value, is in some ways in conflict with the value of record keeping. To know what to design, you need to know your users' values. Number 3, consider both direct and indirect
stakeholders. We usually think about direct stakeholders. Those are the people that actually use the
system that we create. Value sensitive design encourages us to think about indirect stakeholders as
well. Those are people who do not use the system, but who are nonetheless affected by it. When you're designing an internal system for use by a bank, for example, it's used by bank employees, but bank customers are likely to be impacted by the design. Number 4, brainstorm the interface's
possibilities. Think not only about how you're designing the system to be used, but how it could be
used. If you wanted to make a system that made it easier for employees to track their hours, for
example, consider whether it could be used by employers to find unjust cause for termination. Number
5, choose carefully between supporting values and prescribing values. Designing for change is about
prescribing changes in values, but that doesn't mean we should try to prescribe values for everyone. At
the same time, there are certain values held in the world that we would like to change with our
interfaces if possible with regard to issues like gender equality or economic justice. Be careful, and be
deliberate about when you choose to support existing values, and when you choose to try to change
them with your interfaces.
The idea of artifacts or interfaces having political clout, brings up two challenges for us as interface
designers. First, we need to think about places where we can use interface design to invoke positive
social change. And second, we also need to think about the possible negative ramifications of our
interfaces. What undesirable stereotypes are we preserving or what new negative dynamics might we
create? Now obviously, I work in online education and I have been struck by both sides of this. On the positive side, I've been amazed by the power of online education to diminish the significance of superficial obstacles to people's success. I've spoken with people who have had difficulty succeeding in traditional college settings due to social anxiety disorders or other disabilities. Things that had no real connection
to how well they understood the material, but they made it difficult to interact with other people or to
attend the physical classes. But by putting everything in forums and emails and texts and videos,
they've been able to overcome those obstacles, but there's also the risk that online education will only
benefit people who already have advantages. The early data suggests that the majority of consumers of online education are middle-class, white, American males. There's little data to suggest that it's reaching minorities, reaching women, reaching international students, or reaching economically disadvantaged
students. And while I believe that's a problem that can be solved, it's certainly something we need to
address. Otherwise, we risk online education being a luxury more than an equalizer. So, that's how
these principles relate to online education. Take a moment and reflect on how they apply to the area of
HCI that you chose to explore. What role can your technology play in creating positive societal change
and what risks are there if your technology catches on?
We've talked a good bit about how technology and interfaces can affect politics and culture and society,
but we wouldn't be telling the whole story if we didn't close by noting the alternate relationship as well.
Political relationships and motivations can often have an enormous impact on the design of technology.
From Bijker's book Of Bicycles, Bakelites, and Bulbs, the bulbs part refers to the battle over the design of the first fluorescent lightbulb in 1938. General Electric created a new kind of light that was far more energy efficient. The power companies were afraid that this would reduce power consumption and cut into their profits. After a long drawn-out battle involving the Antitrust Division of the US government and the US Department of War, the fluorescent bulbs that were ultimately sold were not as good as
they technologically could be in order to preserve others' business interests. That issue is more
prevalent today than ever. More and more, we see compatibility between devices and usage policies
for technologies determined not by what's technologically possible but by what satisfies political or
business needs. So here's an example. To keep up with everything that I like to watch on TV, I have five different subscriptions. I have cable TV, I have Hulu, I have Amazon Prime, I have Netflix, and I have an HBO subscription on top of my cable subscription. And that's not to mention things that I watch for free on their own apps, like Conan or anything on YouTube. And you might think, wouldn't it be awesome to just have one experience that could navigate among everything I want to watch? And it
would be awesome, and there's no technological reason against it. But there's a complicated web of
ownership and licensing and intellectual property agreements that determine the way that technology
works. Technology changes society but society changes technology too.
You have almost certainly experienced political or business motivations changing the way in which a
technology of yours works. Similar to the fluorescent light bulb, oftentimes these motivations are to preserve the power or profit of an influential organization in the face of radical change. Sometimes
they might be the products of a relationship or an agreement between vendors or organizations to
emphasize one another's content. Generally, these are instances where technology either performs
sub-optimally or has certain features because someone besides the user benefits. So reflect for a
second and see if you can think of an instance where some technology you use was designed with this
kind of political motivation in mind.
This question can have some pretty loaded answers and I encourage you to give those answers. But I'm
going to give a slightly more innocuous one: exclusivity agreements in video games. Imagine I'm a video game developer and Amanda is Nintendo. And I'll say, hey Nintendo, I'll agree to release my game only on your console if you agree to promote my game in your console advertisements. I benefit from free advertising, Nintendo benefits from getting a selling point for its console. There's probably no
technological reason my game can't run on other consoles. But there's this business relationship
determining the way that the technology works.
In this lesson, we've discussed the different ways in which interfaces interact with existing power
structures or political motivations. We looked at how interfaces can have negative repercussions,
either by design or by happenstance.
We looked more optimistically at how interfaces can be powerful tools for equality and justice in the
world, whether intentionally or accidentally.
[MUSIC] In this unit we've talked about the various different design principles that have been
uncovered after years of research and work in HCI. And while they are presented in many ways as
individual sets of guidelines and principles, there is a lot of overlap among them.
So in this recap of the unit, we'll try to tie all these seemingly different threads together. We'll also ask
you to reflect on how you might apply some of these concepts as a whole to certain design problems.
One way of knitting together the different ideas of HCI is to start very close and zoom out. At the
narrowest level, we might view HCI as the interaction between a person and an interface.
This is the processor model of the human's role in the system. This view looks almost like an interaction between two computers; one just happens to be a human.
The GOMS model approaches HCI in this manner as well. It distills the human's role into goals, operators, methods, and selection rules, all of which can be externalized. But this is a pretty narrow view of HCI.
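To make that concrete, here is a minimal sketch of a GOMS-style prediction in Python. The operator timings are rough figures in the spirit of the keystroke-level model, and the two methods are my own invented example, not anything from the lecture.

    # Approximate seconds per primitive operator (illustrative values).
    OPERATOR_TIMES = {
        "mental": 1.35,      # mentally prepare for the next step
        "point": 1.10,       # move the pointer to a target
        "click": 0.20,
        "keystroke": 0.28,
    }

    # Two methods for achieving the same goal: deleting a file.
    METHODS = {
        "menu": ["mental", "point", "click", "point", "click"],
        "keyboard shortcut": ["mental", "keystroke"],
    }

    def predicted_time(method):
        """Sum operator times to predict how long a method takes."""
        return sum(OPERATOR_TIMES[op] for op in METHODS[method])

    # A selection rule decides between methods, e.g., experts use the
    # shortcut while novices fall back on the menu.
    for name in METHODS:
        print(f"{name}: {predicted_time(name):.2f}s")

Notice how thoroughly this externalizes the human: the person is reduced to a sequence of timed operations, which is exactly why the processor model is such a narrow view.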
For the most part, we're interested in something more task-focused. In fact, this is where we'll likely
spend the majority of our time. This is the user interacting through some interface to accomplish some
task.
That's what we mean by the predictor model of the user. The user is actively involved in looking at the
task, making predictions about what to do, and making predictions about what will happen.
Here we also look at how the interface can ideally disappear from this interaction, making the user feel
like they're working directly with the task, not interacting with an interface.
To help users more quickly make sense of the interface and understand the underlying task, we have to understand their mental models, and in turn, we have to help make sure their mental models match the actual task. Here we have to get into questions like understanding the user's errors and understanding the mapping between representations and the underlying tasks. We also have
to address questions like expert blind spot and learned helplessness.
Now, fortunately, we have a tool to help us with this: cognitive task analysis, and the related hierarchical task analysis. So much of what we deal with in HCI occurs within this process of a human completing a
task through some interface.
However, that's not all we're interested in. We're also interested in how this interaction occurs beyond
just the individual and the interface and the task.
That's what was meant by the participant model of the user. The user is not merely interacting with an
interface or interacting with a task through an interface. They're interacting with other interfaces,
other individuals and society as a whole. They are active participants in the world around them.
That's what activity theory advocates: treating the unit of analysis not as a task, but as an activity, including some elements of the context surrounding the task.
Other times, we're interested in deeply understanding the situated context in which a person is acting. That's where situated action comes in.
Other times we're interested in dynamics even broader than this. Sometimes, we're interested in how
the interfaces we design can create positive social change.
That's exactly the goal of some of our design guidelines as well. To use interfaces to create a more equal
society for all people.
When we started that conversation, I commented that when you do HCI right, users won't actually know you've done anything at all. Good HCI disappears between the user and the tasks that they're completing. As a result, people can underestimate how complex good HCI is to achieve. In this
unit, one of my goals has been to help pull back the curtain on all the theories that go on behind the
scenes of the designs of some of the interfaces that you use everyday. So as we close this unit, take a
second to reflect on the interfaces you use that have disappeared between you and the task. Focus
especially on interfaces that were never visible in the first place, not interfaces that became invisible by
learning. Which of the principles that we've discussed do you now see at play in the design of some of
those interfaces?
For me, I'm actually using an example of one of these interfaces right now. I have a teleprompter in
front of me. I didn't always use a teleprompter, but as soon as I tried one, I was hooked. And part of
that is because of how well designed the interface is. The very first time I used it, it made things
immediately easier, instead of introducing a new learning curve. First, it uses very simple interactions,
quick presses that accomplish anything I could need during the actual recording process. Second, it
builds on a good mental model of the task that I'm performing. It understands, that while recording, the
only things I need to do regularly are pause, play, scroll back, and scroll forward. There are a lot of
other options that it has but it keeps those out of the way during the actual recording process, because
they're not necessary at the time. Personally, though, I think the teleprompter is great to analyze from the perspective of distributed cognition. I struggle when recording with talking too fast. Without the
teleprompter, I have to remind myself to slow down, while also remembering what to say. That's a lot
to keep in memory at the same time. The teleprompter lowers the cognitive load involved in
remembering what to say, but it also controls my speed because I can't read what hasn't yet appeared.
So the teleprompter takes care of two cognitive processes. Remembering what I have to say, and
monitoring the speed at which I am presenting. So the system comprised of the teleprompter and me
is better at recording than I am alone.
Let's go back to our original design challenge for Morgan from the very beginning of this unit. We
talked about how Morgan wants to be able to listen to audio books on the go which includes things like
leaving bookmarks and taking notes. Using everything we've talked about so far, revisit this problem.
Start by thinking narrowly about the physical interactions between Morgan and the interface and then
zoom out to her interactions with the task as a whole, then zoom out even further to how the interaction between Morgan and the interface relates to other things going on in the world around her. And last, think about how interfaces like this have the potential to affect society itself.
There are a lot of designs that you could propose, but the question here isn't what you design, the
question is, how will you develop it? How will you evaluate it? How do you know which ideas are good
and which are bad? Now we've given some heuristics and principles for doing this, but that doesn't
automatically get you to a good interface. That just kind of establishes a baseline. That's just what the
principles portion of this course covers. To fully develop interfaces using these principles, we need the
methods of HCI as well.
Throughout this unit, I have repeatedly asked you to revisit the area of HCI you chose to keep in mind throughout our conversations. Now, take a second and try to pull all those things together. You thought about how the area you chose applies to each of the models of the human's role, how it applies to the various different design guidelines, and how it interacts with society and culture as a whole. How
does moving through those different levels change the kinds of designs you have in mind? Are you
building it from low level interactions to high level effects? Are you starting at the top with the desired
outcome and working your way down to the individual operations? There are no right or wrong
answers here. The important thing is reflecting on your own reasoning process.
I mentioned in one of our first conversations that this course is distinct from a user interface design course. We're interested in general principles and general methods for designing interactions between humans and various different kinds of computational devices. Designing for traditional screens has its own set of specific principles. However, it's quite likely that you'll be designing for this kind of traditional device. Here are five quick tips for designing effective on-screen user interfaces. Number one, use a grid.
Grids are powerful ways of guiding the user's sight around your interface, highlighting important content, grouping together content, and so on.
Number two, use whitespace. Users are very good at consuming small chunks of information at a time.
Notice how news articles often use very short paragraphs and highway signs have lots of whitespace
around the text. Whitespace works with grids to provide context and guide the user's visual perception of the interface. Number three, know your Gestalt principles. Gestalt principles in UI design refer to how users perceive groups of objects.
Number four, reduce clutter. The eye has difficulty processing cluttered information, so reduce clutter
wherever possible. Grids, whitespace, and Gestalt principles can help with this, because they invisibly communicate structure that might otherwise need to be communicated visually. Instead of drawing a box to group controls together, you can surround them with whitespace. Instead of using headers and text to label different portions of some content, you can separate them with a grid, and so on. Number five, design in grayscale. Color can be a powerful tool, but it also runs afoul of universal design. There are enough color-blind individuals in the world that relying on color too much is really problematic. Color can emphasize the content and structure of your interface, but it shouldn't be necessary to understand it.
Think about a traffic light, for example: it communicates almost entirely through red, yellow, and green, which is a problem if you are deuteranopic, or red-green color blind. But the red light is always at the top and the green light is always at the bottom. Even if you are deuteranopic, you can still understand what the light is saying. Color emphasizes the content, but the content doesn't rely on the color. If
you're going to be doing a lot of screen-based UI design, I do highly recommend taking a class on the topic. It will cover these principles in a lot more depth, give stronger rules for designing good grids and using whitespace, and cover one of my favorite topics: typography.
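As a toy illustration of the grayscale tip above, here is a sketch, entirely my own invention, of a status indicator that pairs every color with a symbol and a text label, so the design still works for someone who can't perceive the color.

    # Each status carries a symbol and label alongside its color, so
    # meaning survives even when the color is imperceptible.
    STATUS_STYLES = {
        "ok":      ("green",  "+", "All systems normal"),
        "warning": ("yellow", "!", "Attention needed"),
        "error":   ("red",    "x", "Action required"),
    }

    def render_status(status):
        """Render a status legibly even without color perception."""
        color, symbol, label = STATUS_STYLES[status]
        # The symbol and label carry the meaning; color only reinforces it.
        return f"[{symbol}] {label} (drawn in {color})"

    print(render_status("error"))  # -> [x] Action required (drawn in red)

Like the traffic light, the color here emphasizes the message, but the message never depends on it.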
There's one final thing you must understand about the guidelines and heuristics and principles we've
talked about. They're only half the picture. They're necessary for good design, but they aren't
sufficient. You can't just grab these guidelines off the shelf, throw them at a new task, and expect to
have a perfect interface the first time. These principles give you a solid foundation. But every domain,
every task, every audience has its own unique requirements and criteria. To design usable interfaces,
you have to understand your specific user. And remember, that isn't you. What that means is that you
have to go out and find what your users need. You have to get inside their head and understand the
task. You have to prepare multiple different ideas for them to try out. You have to evaluate those ideas
with real users. And you have to take that experience and use it to improve your design and start the
process all over again. These guidelines and principles are useful in making that process as efficient as possible. But they aren't sufficient on their own. These principles are only half the picture; the other half is the methods.
[MUSIC] When you start to design a new interface, your goal is to design an interface that will meet the needs of the users better than the existing design. It's very rare that we design for tasks that users have never performed before; we're almost always developing new ways to accomplish old tasks. Facebook, for example, was a new way to socialize and communicate with others, but the fundamental activities of socializing and communicating weren't new. Facebook just met those needs better. Or at least met certain needs better, depending on who you ask. In order to design interactions that are better than
existing designs, it's important to take into consideration the users' needs at every stage of the design
process. That's what this unit of this course will cover.
So in this unit we'll cover the Design Life Cycle as well as methods for gathering feedback and
information from users at every stage of the cycle. However, before we get started on that, we need to
set up some basic concepts we used throughout our discussions.
And then we'll introduce the four-phase design life cycle. We'll discuss a few general methods for pursuing the design life cycle.
User-centered design is design that considers the needs of the user throughout the entire design
process. As far as we're concerned, that's pretty much just good design but oftentimes that isn't the
way design is done. Design is often done to meet some functional specifications of what the tool must
technically be able to accomplish, instead of considering the real needs of the user.
Or sometimes, people will go through an entire design process believing they understand the needs of
the user without ever really checking. User-centered design is about prioritizing the user's needs while also recognizing that we don't know the user's needs. So we need to involve them at every stage of the process. Before we start, we need to examine the users' needs in depth, both by observing them and by
asking direct questions. After we start designing, we need to present our design alternatives and
prototypes to the user, to get feedback. And when we near a design that we like, we need to evaluate
the quality of the design with real users. Having a good working knowledge of HCI principles helps us go
through this more quickly.
The International Organization for Standardization has outlined six principles to follow when pursuing user-centered design. Number one, the design is based upon an explicit understanding of users, tasks, and
environments. That means that we must gather information about the users, the tasks they perform,
and where they perform those tasks, and we need to leverage that knowledge throughout the design
process.
Number two, users are involved throughout design and development. Involvement can take on many
forms, from regularly participating with interviews and surveys about designs and prototypes to actually
working on the design team alongside the designers themselves.
Number three, the design is driven and refined by user-centered evaluation. Number four, the process is iterative. No tool is developed once, released, and then abandoned. Designs undergo constant iteration and improvement, even after being released. Number five, the design addresses the whole user experience, not just the interface in isolation. Number six, the design team includes multidisciplinary skills and perspectives. Good teams for pursuing user-centered design include people with a number of different backgrounds, like psychologists, designers, computer scientists, domain experts, and more. So keep these principles in mind when
you're doing user experience design.
When we talk about user-centered design, we throw around the word user as if it's pretty obvious what
it means. The user is the person who uses the interface that we create, right? However, that's not the only person in whom we're interested. There are multiple stakeholders in this design, and we want to explore how our design is going to affect all of them. The user themselves is what we call the primary stakeholder. They're the person who uses our tool directly.
Secondary stakeholders are people who don't use our system directly, but who might interact with the
output of it in some way.
Tertiary stakeholders are people who never interact with the tool or even interact with its output, but
who are nonetheless impacted by the existence of the tool. So let's take a couple of examples of this.
Imagine we're designing a new grade book tool that makes it easier for teachers to send progress reports to parents. Teachers would interact with the tool, inputting grades and feedback, and so teachers would be our primary stakeholders. Parents never use the tool directly, but they receive its output in the form of progress reports, so they are secondary stakeholders. And students never touch the tool or its output, but they're certainly affected by its existence, making them tertiary stakeholders.
You might actually come from a software engineering background, and so while user-centered design sounds obvious to some people, you might have experienced the other side of the coin. In many industries and domains, software engineers are still left to design user interfaces themselves. There's a fantastic book about this called The Inmates Are Running the Asylum, by Alan Cooper, where he
compares technology to a dancing bear at a circus. He notes that people marvel at the dancing bear,
not because it's good at dancing, but because it dances at all. The same way, people marvel at certain
pieces of technology not because they work well, but because they work at all. The book was released
in 2004 and since then the user has become more and more a focal point of design. And yet there are
still places where individuals with little ACI background are designing user-facing interfaces for one
reason or another. Since it's a stronger chance you've worked in software engineering, reflect on that a
bit. Have you seen places where software engineers, data scientists, or even non-technical people were
put in charge of designing user interfaces? How did it go?
I encountered this in my first job, actually. Somewhere between my freshman and sophomore years at
Georgia Tech, I had a job as a user interface designer for a local broadcast company. I designed an
interface, then I handed it over to a team of engineers for implementation. Late in the process, the requirements were changed a bit, and a new configuration screen was needed that the engineers just went ahead and built themselves. We got the finished tool and it all worked beautifully and perfectly, except for this configuration screen. It was a list of over 50 different settings, each with 3 to 5 radio buttons to the side. Each setting was a different length. Each radio button label was a different length. They were kind of placed all over the canvas. There was no grid. It was illegible. It was unusable, but it was technically functional. It met the requirements described in terms of what the user must be able to do, just not how usable it was. Fortunately, there's a greater appreciation of the value of user-centered design now than there was then. Many spaces have become so crowded that the user experience is what
can really set a company apart. I've actually been noticing a trend lately toward new user experiences
around really old tasks. I use an app called Stash to buy and sell small amounts of mutual funds. Buying mutual funds has been around forever, and E-Trade has even been doing it online for a long time. What differentiates Stash is the new user experience: automated investing, simple tracking, simplified
summaries. User experience design really has become a major differentiator between success and
failure.
User-centered design is about integrating the user into every phase of the design life cycle. So, we need
to know two things. What the design life cycle is and how to integrate the user into each phase. Now if
you look up design life cycles you'll find a lot of different ideas. We're going to discuss in terms of a four
phase design life cycle that's pretty general and probably subsumes many of the other ones you'll find.
The first part of this is Needfinding. In Needfinding, we gather a comprehensive understanding of the
task the users are trying to perform. That includes who the user is, what the context of the task is, why
they're doing the task, and any other information related to what we're designing. Second, we develop
multiple design alternatives. These are very early ideas on the different ways to approach the task. It's
important to develop multiple alternatives to avoid getting stuck in one idea too soon. The third step is
prototyping. We take the ideas with the most potential and we build them into prototypes that we can
then actually put in front of a user. Early on we might do this in very low fidelity ways like with a paper
and pencil, or even just verbally describing our ideas. But as we go on we refine and improve. Fourth,
and most importantly, we perform user evaluation. We take our ideas that we prototyped and put
them in front of actual users. We get their feedback, what they like and what they don't like, what works and what doesn't, and we use that feedback to begin the cycle anew.
At every stage of this design life cycle, we're interested in gathering information from the user to better
inform our designs. To do that, we need a number of methods to actually obtain that information.
Fortunately, there are a number of methods we can employ to try to gather the information we need.
And in fact, the majority of this unit will go through all these different methods. These will become the
tools in your toolbox, things you can call upon to grab the information you need when you need it. Note that many of these methods are so well-developed that they could fill entire units, or even entire courses. For example, we'll spend about three or four minutes talking about naturalistic observation. And yet, there are entire textbooks and courses on how to do naturalistic observation really well. The goal here is to give you enough information to get started and enough to know what
you need to explore next. Remember, one of the original goals of this class was not just to understand
more HCI but also to understand how big the field of HCI actually is.
When we talk about feedback cycles, we talk about how they're ubiquitous across nearly every field.
And HCI itself isn't any different. In a feedback cycle, the user does something in the interface to
accomplish some goal, and then judges, based on the output from the interface, whether the goal was
accomplished, then they repeat and continue. In HCI, we're brainstorming and designing interfaces to
accomplish goals. And then based on the output of our evaluations, we judge whether or not the goals
of the interface were actually accomplished, then we repeat and continue. In many ways, we're doing
the same things that our users are doing, trying to understand how to accomplish a task in an interface.
Only in our case, our interface is the set of tools we use to build and evaluate interfaces, and our goal is to help users accomplish their goals.
There is one final distinction we need to understand going forward because it's going to come up at
every stage of the design lifecycle, qualitative versus quantitative data. At every stage of the design
lifecycle, we're interested in gathering data from users. Early on that might be descriptions of what
they do when they're interacting with a task. Or it might be measures of how long certain tasks take to
complete, or how many people judge a task to be difficult. Later on, though, it might be whether or not
users prefer our new interfaces or how much better they perform on certain tasks.
Now data will always fall into one of two categories, qualitative and quantitative. Quantitative is probably the easier one to describe: quantitative data covers anything numeric. Qualitative data covers descriptions, accounts, and observations; it's often in natural language. It could be open-ended survey responses, interview transcripts, bug reports, or just your personal observations.
I've heard it described this way: quantitative data provides the what, while qualitative data provides the how or the why. When performing needfinding or when doing some initial prototype evaluations, we're likely
interested in users' qualitative descriptions of their tasks or their experiences with the interface. It's
generally only after a few iterations that we start to be interested in quantitative analysis, to find
numeric improvements or changes.
We can also use these in conjunction with one another, collecting both quantitative and qualitative data
from the same participants. That's referred to as a mixed methods approach: a mix of qualitative and quantitative data that paints a more complete picture of the results.
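As a loose sketch of what a mixed-methods analysis might look like in practice (the data, field names, and thresholds here are all invented for illustration):

    # Hypothetical mixed-methods data: each participant supplies a numeric
    # rating (quantitative) and a free-text comment (qualitative).
    responses = [
        {"rating": 2, "comment": "The controls were hard to find mid-run."},
        {"rating": 5, "comment": "Loved how quickly I could resume my book."},
        {"rating": 1, "comment": "I kept triggering the wrong action."},
    ]

    # Quantitative: the 'what' -- an average satisfaction score.
    mean_rating = sum(r["rating"] for r in responses) / len(responses)
    print("Mean rating:", round(mean_rating, 2))

    # Qualitative: the 'why' -- read the comments behind the low scores.
    for r in responses:
        if r["rating"] <= 2:
            print("Low rating explained by:", r["comment"])

The numbers tell us that satisfaction is low; the comments start to tell us why.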
Let's do a quick exercise on quantitative versus qualitative data. Let's imagine we're doing end of course
evaluations for some class. For each of the following types of data, mark whether it would be
considered quantitative or qualitative. You can skip ahead, if you don't want to listen to me read all
these out. We have responses to "on a scale of 1 to 5, rate this course's difficulty"; responses to "how much time did you spend per week on this course?"; responses to "what did you like about this course?"; the count of students mentioning office hours in the above responses; the percentage of students that completed the survey; responses to a forum topic requesting non-anonymous course reviews; the number of participants in a late-semester office hours session; and the transcript of the conversation from that late-semester office hours session.
Quantitative data is numeric. Any of these things that can be measured numerically really qualifies as quantitative data. Many of these are things that we can just count. We count the number of students mentioning office hours. We count the number of participants. For the completion percentage, we basically count the number of students that completed the survey and divide by all the students in the class. And for time spent, we have students count how many hours per week they think they spend on the course. The first option, the 1-to-5 difficulty rating, can be treated as quantitative as well, since it produces numbers. The open-ended responses, the forum reviews, and the office hours transcript, on the other hand, are qualitative.
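As a tiny illustration of the counting described above (the responses and enrollment figure are made up, and real qualitative coding would be more careful than a substring match):

    # Made-up course-survey tallies illustrating the counts above.
    survey_responses = ["office hours helped", "too much reading",
                        "more office hours please", "great lectures"]
    enrolled_students = 200

    mentions = sum("office hours" in resp for resp in survey_responses)
    completion_rate = len(survey_responses) / enrolled_students * 100

    print("Students mentioning office hours:", mentions)    # 2
    print("Survey completion rate:", completion_rate, "%")  # 2.0 %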
As we go through the unit in this course on methods, we're going to take a running example as a design
challenge to explore the different HCI research methods. I'm going to choose a challenge quite close to
home for me. Improving the MOOC recording process.
As we go through the lessons in this unit, we're going to talk about the design life cycle. The process of
finding user needs, brainstorming design alternatives, prototyping interfaces and evaluating those
prototypes with users. Now depending on how you're taking this material, you might do this on your
own with projects or assignments. Some of the most interesting applications of this material, though,
are the emerging areas, which might be outside the scope of the assignments that you're doing. So as
you're going through lessons in this unit, try to brainstorm a conceptual plan for how you might pursue
the design life cycle in your area of interest. There are interesting issues that arise that are unique to different domains or application areas. In educational technology, for example, you need to take into
consideration lots of stakeholders, students, parents, teachers, administrators and more. In virtual
reality or wearable devices, you need to think a lot more about the technical constraints, finding creative ways to get around technological limitations to interaction. In computer-supported cooperative work, you might want to explore places where the phenomenon already exists, like
Wikipedia or geographically distributed companies. Throughout this unit, we'll revisit the stages of the
design life cycle and explore how it might apply to your area of interest.
In this lesson we introduce the concept of the design life cycle. There are a lot of versions of the design
life cycle out there, but we're going to focus on the most general four-step cycle.
The cycle starts with needfinding, and then goes to constructing design alternatives, then to
prototyping, and then to user evaluation, and then the cycle repeats.
[MUSIC] Before we start working with real users, there are a few ethical considerations we have to make. If you are doing research as part of a university, these are part of your contract with the university to do research on their behalf. Even if you are doing research independently or in industry, there are still ethical obligations to follow. These considerations are important not only to preserve the
rights of our users, but also to ensure the value of the data that we gather.
In this lesson, we'll talk a little bit about where these kinds of ethical considerations came from.
We'll also talk about the Institutional Review Board, or IRB, the university organization that governs human
subjects research.
In the first half of the 20th century, a number of pretty clearly unethical human subjects experiments
took place. Many of them were conducted by scientists working for the Axis powers during World War
II. But famously, many were also conducted right here in our own backyard, here in the United States.
Among them were Milgram's obedience experiments, where participants were tricked into thinking that
they had administered lethal shocks to other participants to see how obedient they would be.
There was the Tuskegee syphilis study, where rural African American men who had syphilis were deliberately left untreated, and uninformed of their condition, so researchers could study the disease's progression.
In response to all this, the National Research Act of 1974 was passed, which led to the creation of
institutional review boards to oversee research at universities.
Among other things, the law dictated that the benefits to society must outweigh the risks to the
subjects in the case of these experiments. It also dictated that subjects must be selected fairly, which
was a direct response to the Tuskegee syphilis study. And perhaps most importantly, it demanded rigorous informed consent procedures, so that participants know what they're getting into and can
back out at any time. These efforts all attempt to make sure that the positive results of research
outweigh the negatives and that participant rights are always preserved.
In this lesson, we're largely focusing on the practical steps we go through to get approval for human subjects research. But before we get into that, I want to highlight that this isn't just bureaucracy: these steps are necessary to make sure that people are treated ethically at all stages of research. The IRB's main task is to make sure the potential benefits of the study are worth the potential risks. So as part of
that, part of the role is to make sure the potential benefits are significant. A lot of the steps of the
process are going to ensure that the data that we gather is useful. So for example, the IRB is sensitive
about the perception of coercion. When participants feel coerced to participate in research, the data
they actually supply may be skewed by that negative perception which impacts our results. Similarly,
we might design studies that have some inherent biases or issues to them. We might demand too
much from participants or ask questions that are known to affect our results. Much of the initial
training to be certified to perform research is similarly not just about doing ethical research but also
about doing good research. By recording who is certified, IRB helps ensure that research personnel all
understand the basics of human subjects research. IRB is also there to monitor for these things as well
and many of the steps of this process ensure that the research we perform is sound and useful. After
all, if the research we perform is not useful, then even the smallest risks will outweigh the nonexistent
benefits.
If you're going to be doing research as part of a university project or university class, you need IRB approval. Different universities have different processes and policies for getting started with IRB. We're going to discuss the Georgia Tech policies, since that's where this class is based, but you should check with your university if you're from somewhere else to make sure you're following the right policies for your school. To get started, we need to complete the required training. So here I'm showing
the IRB website, which is researchintegrity.gatech.edu/irb. And to get started, you need to complete
your required training. So click required training over on the left.
This'll take you to a page that overviews the IRB required training and gives you a link on the left to
begin your CITI training.
And you're going to want to complete Group 2, Social and Behavioral Research Investigators and Key
Personnel. I can't show you exactly what it looks like to sign up fresh because I've already completed it.
But you should be able to add a course and add that as your course. After you've completed CITI
training you'll receive your completion report and then you'll be ready to get started with IRB.
After you've completed any necessary training you can access the IRB application for your own
university. We're doing this in terms of Georgia Tech, so here's the tool we use, called IRBWISE.
Here under my protocols, you'll see a list of all of the protocols to which you're added. A protocol is a
description of a particular research project. It outlines the procedures that the IRB has approved
regarding consent, recruitment, experimentation and more. Here we see approved protocols.
Protocols that are new and haven't been submitted yet. Protocols that are waiting for the IRB to act on
them. And amendments to protocols. Amendments are how we change earlier protocols to add new
researchers or change what we're approved to do. After a protocol is approved, any changes must be
submitted to the IRB as an amendment to be accepted separately.
Generally speaking, you might not ever be asked to complete a protocol yourself. You might instead
just be added to an existing protocol. Still, you need to make sure to understand the procedures
outlined by any protocol to which you're added, because they still govern what you do. So we'll run
through the process of creating a protocol, but this will also cover the details to know about any
protocol to which you are added. So for this, I have a draft protocol covering our user study on people
who exercise.
Every protocol starts with the title of the protocol, and some certified personnel. These are required
just to save the protocol.
We would enter their name, select their role, and if their certification isn't already in the system, we
would attach it here.
The protocol description covers the study at a high level. This should briefly touch on what will be done,
what the goal of the research is and what subjects will be asked to do. It doesn't cover all the details,
but it covers enough for someone to understand generally what we're going for with this study.
Depending on what we're doing, we may need to provide data collection methods. This includes things
like surveys, pre-tests and post-tests, interview scripts, anything pre-prepared to elicit data from the participant. Then we also need to fully describe the potential benefits of the study. Remember, IRB is about making sure that the benefits outweigh the risks. If there are no benefits, then the benefits can't outweigh the risks. Benefits don't need to be to the individual participants, though; they could be to
the greater community as a whole.
Then we describe the plan for the statistical analysis if we have one. Qualitative research might not
have a statistical analysis plan, so that's why I've left this blank. Finally, we need to describe start and
end dates of the research. Very often, this will break the research into a data collection phase and an analysis phase, where we actually analyze the data that we collected. Now we won't generally need to
worry about many of the remaining options on this form, because we're not doing clinical studies and
we generally don't have external funding unless you're working on a professor's research project. So
now let's move on to subject details.
Because we're interested in human-computer interaction, we almost certainly will have human-subject
interaction. So when it asks, will the research involve direct interaction with human subjects, we would
click, yes.
That will bring us to a screen where we describe our subjects and the data we plan to collect from
them. First, they want to know how many subjects we have and what genders. They want to make sure that we're not wasting participants' time if we're not going to actually analyze all their data. Second, they want to make sure that we're being fair to the different genders; a common problem in early research was over-representing male populations.
Third, they want the scientific justification for the number of subjects to enroll in our study. Like I said,
they want to make sure that we're not going to waste the time of a bunch of participants and then just
throw their data out. That wouldn't be very respectful of our participants' time. If we're doing
statistical tests this might be the number of participants necessary to find a certain effect size, which
we'll talk about when we talk about quantitative research. If we're doing qualitative research, this is
usually a smaller number and is more based on how many different perspectives we need to get a good
picture of what we're trying to analyze. Alternatively, we might have external limits on our number of
participants. For example, if you're doing classroom research, your maximum number of students is the
maximum number of students in the class.
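For the statistical-test case, the usual way to justify a sample size is a power analysis. Here's a minimal sketch using Python's statsmodels library; the effect size, significance level, and power below are illustrative assumptions you'd replace with values justified by your own study:

    # Minimal power-analysis sketch (values are illustrative assumptions).
    # Requires the statsmodels package (pip install statsmodels).
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,  # assumed medium effect size (Cohen's d)
        alpha=0.05,       # significance level
        power=0.8,        # desired probability of detecting the effect
    )
    print("Participants needed per group:", round(n_per_group))  # about 64

A number like this is exactly the kind of scientific justification the protocol form is asking for.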
As before, we can skip the questions, for our purposes at least, that are more based on clinical research.
But at the bottom, we need to provide the steps necessary to ensure additional protection of the rights
and welfare of vulnerable populations. For example, if we're working with 10 year olds, how do we
make sure 10 year olds really understand that they really do have the right to opt out of this research?
We need to provide a plan for that here if we're working with a vulnerable population.
And last, we'll note the kind of compensation we plan to provide the participants. Many times we
won't compensate our participants at all. But if we're doing a bigger research study that actually has
some external funding, it's pretty normal to give them some sort of monetary compensation for
participation. If we're using the Georgia Tech subject pool, our compensation will often be extra credit in a certain class, very often a psychology class. Note that the recruitment procedures are very
important, especially if you're doing something like classroom research, where there can be a very
significant perception of coercion to get people to participate. If I, as an instructor, am recruiting
students in my own class to participate in my research project, I have to be very clear that it won't
come back to haunt them if they choose not to participate.
One of the most important elements of IRB approval is consent, which was one of the principles emphasized by the Belmont Report. If we're doing any interaction with our human subjects, we definitely need to
think about consent procedures.
On the consent information page, first, we need to indicate what kind of consent we'll receive. Most
commonly, this will be written consent required. In this case, participants will sign, or digitally sign, a consent form to start the study, but in some cases a waiver may be obtained.
We might also receive a waiver of documentation of consent. This occurs for low-risk research, where the written consent form itself would be the only record of the participant's identity. This applies to a lot of
survey research or unrecorded interviews, where participants can be informed of their rights at the
start, and their own continued participation constitutes continued implied consent. There's no reason
to have them sign a consent form, because that consent form is the only reason we'd ever be able to
identify them after the study.
If we're involving children, non-English speakers, or other at-risk populations in our study, there may be
some additional boxes to complete.
Finally, it's also possible to have protocols where deception or concealment is proposed. In HCI, for
example, we might want to tell participants that an interface is functioning even if someone behind the
scenes is actually just making it look functional, so that we get good research data out of those
participants. For example, if we were testing something like a new version of Siri, we might tell
participants that it's functioning, when in reality someone is writing the responses by hand. If we're
using deception or concealment like that, we'll indicate so here.
At Georgia Tech the Office of Research Integrity Assurance provides consent form templates that we
can tweak to match the specific needs of our study. The templates provide in depth directions on what
to supply. Generally, this is where we disclose to participants the details of the rest of the protocol.
What we're researching, why, how, and why they were invited to participate.
Of the remaining fields, the only ones we're likely interested in are the data management questions. The majority of the others cover either clinical research or biological research or other things that we hopefully won't touch on very much in human-computer interaction, unless the field has changed a lot by the time you're listening to this. Nonetheless, you should peek into the items to make sure that they don't apply to you if you're filling out a protocol like this. Under the Data Management
section, we'll want to describe how we keep participants' data safe. That'll include descriptions of the way we store and de-identify information about participants, and how we'll safeguard the data itself
through password protection or encryption or anything like that.
There are also always some questions that we answer for all studies even though they generally won't apply to us. Generally our studies won't involve the Department of Defense, and generally they won't fall under the other special categories listed, so we can usually just answer no to these.
Finally, at the bottom, there's a place to upload some additional documents. This is where we would
supply things like an interview script, a survey, a recruitment script and other documents that the IRB
would want to see and approve. When we're done, we can click Save and Continue Application.
Institutional review boards govern any research institutions that receive support from the federal
government. But what about research that doesn't receive any federal support? Very often, companies
will do research on their users. This is especially common in HCI. Lots of companies are constantly
doing very interesting testing on their users with a lot of rapid A/B experiments. There's a lot of potential knowledge there, but at the same time, much of what they do likely would not pass IRB review if it were university research. This actually came up recently with Facebook in a paper they published titled "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks." Basically, Facebook wanted to see if they could predict what would make users happy or sad. And as a result, they tweaked the news feed for some users to test out their ideas. In other words, they tried to manipulate their users' moods for experimental purposes.
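As background on the mechanics (a generic sketch, not Facebook's actual system), A/B experiments typically assign each user to a condition deterministically, so the same user always sees the same variant:

    # Generic A/B assignment sketch; not any company's actual system.
    import hashlib

    def assign_condition(user_id: str, experiment: str) -> str:
        # Hash the experiment name and user id together so assignment is
        # effectively random across users but stable for any one user.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return "treatment" if int(digest, 16) % 100 < 50 else "control"

    print(assign_condition("user-42", "hypothetical-feed-experiment"))

The technical setup is simple; the ethical questions around it are what this lesson is interested in.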
People are still discussing whether or not Facebook's study of its impact on users' moods was ethical.
Facebook maintains that the study was consistent with its own data use policy, which constitutes
informed consent. Opponents argue that it doesn't. What do you think? If you think that this was
ethical, why do you think it was ethical? If you think that it was unethical, what could've made it
ethical?
If you said yes, there are several reasons you might have stated. You might agree that because the
terms of service covered it, it was technically ethical research. The users did agree to things like this.
If you said no, the reason you gave may have been that we know users are not aware of what's in the terms of use. We have plenty of studies that indicate that users really don't spend any time reading what they're agreeing to. And while technically it's true that they're still agreeing to it, what we're interested in here are participants' rights. If we know that users aren't reading what they're agreeing to, don't we have an ethical obligation to make sure they're aware before we go ahead with it?
That ties into the other issue. Users weren't notified that they were participants in an experiment. So
even though they technically agreed to it when they agreed to Facebook's terms of service, one could
argue the fact they weren't notified when the study was beginning and ending means that it wasn't
ethical research. I'm not going to give you a right or wrong answer to this. There's a very interesting
conversation to have about this. But what's most important here are the interesting questions that it
brings up, especially in regard to companies doing human subjects research that doesn't have any oversight from the federal government. If you agreed with these reasons why it wasn't ethical, what could
they have done to fix it? Maybe they could have separated out the consent process for research studies
from the rest of Facebook as a whole. Maybe they could have specifically requested that individual
users opt-in, and alert them when the study was done, but not tell them what's actually being
manipulated. And even if the original study was ethical, there were likely things that could have
reduced the backlash. At the same time, those things might have affected the results. These are the
tradeoffs that we deal with.
In a recent paper in the Washington and Lee Law Review, Molly Jackman and Lauri Kanerva, two
Facebook employees, explore exactly this issue.
Jackman and Kanerva specifically note that the ethical guidelines developed in the context of academia
do not always address some of the considerations of industry.
To do so, Facebook designed its own internal review process. In this case, Facebook's process is heavily reliant on deferring to a research area expert. And I don't say defer in a bad way. A key part of IRB is that experiments are reviewed by people with no incentive to permit the study if it isn't ethical.
In this lesson, we've talked about research ethics. Research ethics guide the human subjects research we do to make sure we're respecting the rights of our participants. But they also make sure the data we're gathering is good and useful. At every stage of our design life cycle, we want to keep respect for our
participants at the forefront of our thought. That means being wary of experimenting in ways that
might negatively affect users. That also means only asking users to dedicate their time to evaluating
interfaces that are well thought out. And that means respecting users' viewpoints and position in the
design process.
Introduction to Needfinding
[MUSIC] The first stage of the design life cycle is needfinding, or requirements gathering. This is the
stage where you go and try to find out what the user really needs. The biggest mistake that a designer can make is jumping into design before understanding the user or the task. We want to
develop a deep understanding of the tasks they're trying to accomplish and why. As we do this, it's
important to try to come in with as few preconceived notions as possible.
We're going to start by defining some general questions we want to answer throughout the data
gathering process about who the user is, what they're doing, and what they need.
Then we'll talk about how to formalize the data we gather into a shareable model of the task and a list
of requirements for our ultimate interface. Note that each of these tools could get a lesson on its own
on how to do it, so we'll try to provide some additional resources to read further on the tools you
choose to use.
Before we start our need-finding exercises, we also want to enter with some understanding of the data
we want to gather. These are the questions we ultimately want to answer. That's not to say we should
be answering them every step of the way, but rather, we want to gather the data necessary to come to
a conclusion at the end. Now, there are lots of inventories of the types of data you could gather, but
here's one useful list. One, who are the users? What are their ages, genders, levels of expertise? Two,
where are the users? What is their environment? Number three, what is the context of the task?
What else is competing for users' attention? Four, what are their goals? What are they trying to
accomplish? Five, what do they need right now? What physical objects, what information, and what collaborators do they need? Six, what are their tasks? What are they doing
physically, cognitively, socially? And seven, what are the subtasks? How do they accomplish those
subtasks? When you're designing your need finding methods, each thing you do should match up with
one or more of these questions.
In order to do some real need finding, the first thing we need to do is identify the problem space.
Where is the task occurring, what else is going on, what are the user's explicit and implicit needs? We'll
talk about some of the methods for doing that in this lesson, but before we get into those methods, we
want to understand the scope of the space we're looking at.
Just as we want to get an idea of the physical space of the problem, we also want to get an idea of the
space of the user. In other words, we want to understand who we're designing for. That comes up a lot
when doing design alternatives and prototyping, but we also want to make sure to gather information
about the full range of users for whom we're designing. So let's take the example of designing an
audiobook app for people that exercise. Am I interested in audiobooks just for kids or for adults too?
Am I interested in experts who are exercising or novices at it? Am I interested in experts at listening to
audiobooks or am I interested in novices at that as well? Those are pretty key questions. They
differentiate whether I'm designing for business people who want to be able to exercise while reading,
or exercisers who want to be able to do something else while exercising. The task is similar for both but
the audience, their motivations, and their needs are different. So, I need to identify these different types of users and do needfinding exercises on all of them. One of the most successful products of all time succeeded because of its attention to user types. The Sony Walkman became such a dramatic success because Sony identified different needs for different types of people, designed the product in a way that met all those needs, and then marketed it specifically to those different types of individuals.
During needfinding, there are some significant considerations that need to be made to avoid biasing
your results. Let's go through five of these possible biases. Number one, confirmation bias,
confirmation bias is the phenomenon where we see what we want to see. We enter with some
preconceived ideas of what we'll see, and we only notice the things that confirm our prior beliefs. Try
to avoid this by specifically looking for signs that you're wrong, by testing your beliefs empirically, and
by involving multiple individuals in the need finding process. Number two, observer bias, when we're
interacting directly with users, we may subconsciously bias them. We might be more helpful, for example, to users testing the interface that we designed compared to the ones that other people designed. On surveys, we might accidentally phrase questions in a way that elicits the answers that we want to hear. Try to avoid this by separating experimenters with motives from the participants, by heavily scripting interactions with users, and by having someone else review your interview scripts and
your surveys for leading questions. Number three, social desirability bias, people tend to be nice.
People want to help. If you're testing an interface and the participants know that you're the designer of
the interface, they'll want to say something nice about it to make you happy. But that gets in the way
of getting good data. Try to avoid this by hiding what the socially desirable response is by conducting
more naturalistic observations and by recording objective data. Number four, voluntary response bias,
studies have shown that people with stronger opinions are more likely to respond to optional surveys.
You can see this often in online store reviews. The most common responses are often fives and ones.
For us, that means if we perform quantitative analysis on surveys, we risk over sampling the more
extreme views. Avoid this by limiting how much of the survey content is shown to users before they
begin the survey and by confirming any conclusions with other methods. Number five, recall bias,
studies have also shown that people aren't always very good at recalling what they did, what they
thought, or how they felt during an activity they completed in the past. That can lead to misleading and
incorrect data. Try to avoid this by situating tasks in context, by having users think out loud during activities, or by conducting interviews during the activity itself. These biases can also be controlled more broadly by making sure to engage in multiple forms of needfinding.
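Here's that voluntary response simulation, with all probabilities invented for illustration: people holding extreme opinions respond more often, so the sample over-represents them even though the underlying population is balanced:

    # Simulating voluntary response bias; all probabilities are invented.
    import random

    random.seed(0)
    population = [random.randint(1, 5) for _ in range(10000)]  # true opinions

    # Assume extreme opinions (1 or 5) respond 60% of the time, others 10%.
    respondents = [op for op in population
                   if random.random() < (0.6 if op in (1, 5) else 0.1)]

    def extreme_share(opinions):
        return sum(op in (1, 5) for op in opinions) / len(opinions)

    print("Extreme views in the population:", round(extreme_share(population), 2))   # ~0.40
    print("Extreme views among respondents:", round(extreme_share(respondents), 2))  # ~0.80

The population is 40% extreme, but the respondents are roughly 80% extreme, which is exactly the distortion we'd carry into any analysis of the survey.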
For certain tasks, a great way for us to understand the users' needs is to simply watch. A great way for me to start understanding what it's like to need an audiobook app for exercising is to go somewhere
where people are exercising and just watch them exercise. This is called naturalistic observation,
observing people in their natural context. So I'm fortunate that I actually live across the street from a
park, so I can sit here in my rocking chair on my porch and just watch people exercising. Now, I want to
start with very specific observations and then generalize out to more abstract tasks. That way I'll avoid something called confirmation bias, which is basically when you see what you want to see. So what do I notice? Well, I notice that there are a lot of different types of exercisers. There are walkers,
joggers, and runners. I see some rollerbladers and some people doing yoga. I see a lot of people riding
bikes but the bikers seem to be broken into two different kinds of groups. I see a lot of people biking
kind of leisurely but I also see some bikers who are a little bit more strenuous about it. I'm also noticing
that while joggers might be able to stop and start pretty quickly, that's harder for someone riding a
bike. So I might want to avoid designs that force the user to pull out their phone a lot because that's
going to be dangerous and awkward for people riding bikes. Now I also see people exercising in groups
and also people exercising individually. For those people exercising in groups, I don't actually know if
they'd be interested in this. Listening to something might kind of defeat the purpose of exercising
together. So I'm going to have to note that down as a question I want to ask people later. I also see
that many people tend to stretch before and after exercising, and I'm wondering if we can use that. Maybe we can have some kind of starting and ending sequence, so that a single session is kind of book-ended by both stretching and interacting with our app. Note that by just watching people
engage in the task of exercising, I'm gathering an enormous amount of information that might affect my
design. But note also, that while naturalistic observation is great, I'm limited ethically in what I can do.
I can't interact with users directly, and I can't capture identifying information like videos and photographs, which is why I can't show you what I'm seeing out here. I'm also limited in that I don't know
anything about what those users are thinking. I don't know if the people working out in groups would
want to be able to listen to audiobooks while they're doing yoga. I don't know if Bluetooth headsets are common among these exercisers. Those are the kinds of questions I'll need other needfinding methods, like interviews or surveys, to answer.
Here are five quick tips for doing naturalistic observation. Number one, take notes. Don't just sit around watching for a while; be prepared to gather targeted information and observations about what you see. Number two, start specific and then abstract. Write down the individual little actions you see
people doing before trying to interpret or summarize them. If you jump to summarizing too soon, you
risk tunnel vision. Number three, spread out your sessions. Rather than sitting somewhere for two
hours one day and then moving on, try to observe in shorter 10 to 15 minute sessions, several times.
You may find different and interesting information each time, and your growing understanding and reflection on past
exercises will inform your future sessions. Number four, find a partner. Observe together with
someone else. Take your own notes and then compare them later so you can see if you all interpreted
the same scenarios or actions in the same way. Number five, look for questions. Naturalistic
observations should inform the questions you decide to ask participants in more targeted need-finding
exercises. You don't need to have all the answers based on observation alone. What you need is
questions to investigate further.
Sometimes it's not just enough to watch people engaging in a task. Sometimes we want to experience a
task for ourselves. So that's what I'm going to do. I listen to audiobooks a lot. I don't really exercise. I
should, but I don't. But I'm going to try this out. So I've got my audiobook queued up, I've got my mic
on so I can take notes as I run. So I'm going to go on a jog and see what I discover.
Now we do have to be careful here, though. Remember, you are not your user. When you're working as a participant observer, you can gain useful insights, but you shouldn't over-represent your own experiences. You should use this experience as a participant observer to inform what you ask users going forward.
Let's zoom in a little bit more on what the user actually does. We can do naturalistic and participant observation without having to directly interact much with our users, but we need to get inside users' heads a little more to understand what they're thinking and doing. If you're trying to design interfaces to
make existing tasks easier, one way to research that is to look at the hacks that users presently employ.
How do they use interfaces in non-intended ways to accomplish tasks or how do they break out of the
interface to accomplish a task that could have been accomplished with an interface? If you're designing
a task meant to be performed at a desk like this, looking at the person's workspace can be a great way
of accomplishing this. So for example, I have six monitors around. And yet, you still see Post-it notes on my computer. How could I possibly need more screen real estate? Well, Post-it notes can't be covered up. They don't take away from the existing screen real estate. They're visible even when the computer is off. So these Post-it notes here are a way to hack around the limitations of the computer interface.
Now when you're looking at hacks, it's important to not just look at what the user does and assume you
understand why. Look at their workarounds and ask them why they're using them. Find out why they don't just use the features that are currently in place. You might find they just don't know about them, which presents a different kind of design challenge. Now hacks are related to another method we can use to uncover user needs as well: errors. Whereas hacks are ways users get around
the interface to accomplish their tasks, errors are slips or mistakes that users frequently make while
performing the task within the interface.
When we’re trying to make iterative improvements, one of the best places we can look is at the errors
users make with the tools they currently have available. We can fix those errors, but we can also use
those errors to understand a bit more about the user’s mental model. So, here’s a common example of
an error, for me, which is a slip. I keep my email open on this window over here. I’ll frequently forget
that it’s my active window while trying to type into a window over here. As a result, I’ll hit a bunch of
hotkeys in my email interface. I’ll tag random emails, delete random emails. It's just kind of a mess.
This is a slip because there’s nothing wrong with my mental model of how this works. I understand that
there's an active window and it's not selected. The problem is that I can easily forget which window is
active. Mistakes on the other hand, are places where my mental model is weak and for me a place
where that happens when I’m using my Mac. I’m used to a PC, where the maximize button always
makes a window take up the entire screen. I’ve actually never fully understood the maximize button on
a Mac. Sometimes it seems to work like a PC maximize button. Sometimes it just expands the window a
bit, but not to the entire screen. Sometimes it even enters a full-screen mode, hiding the top task
bar. I make mistakes there because I don’t have a strong mental model of how it works. So, if you were
watching me, you could see me making these errors, and you could ask me why I'm making them: why did I choose to do that if that was my goal? That works for both discovering hacks and discovering
errors: watch people performing their tasks, and ask them about why certain things happen the way
that they do.
If we're designing interfaces for particularly complex tasks, we might quickly find out that just talking to
our participants or observing them really isn't enough to get the understanding we need to design
those interfaces. For particularly complex tasks, we might need to become experts ourselves in order
to design those programs. This is informed by the domain of ethnography, which recommends
researching a community or a job or anything like that, by becoming a participant in it. It goes beyond
just participant observation though, it's really about integrating oneself into that area and becoming an
expert in it and learning about it as you go. So we bring in our expertise and design in HCI and use that
combined with the expertise that we develop to create new interfaces for those people. So for
example, our video editors here at Udacity have an incredibly complex workflow involving
multiple programs, multiple workflows, lots of different people and lots of moving parts. There's no
possible way I could ever sit down with someone for just an hour and get a good enough picture of what they do to design a new interface that will help them out. I really need to train under them. I really need to become an expert at video editing and recording myself in order to help them out. It's
kind of like an apprenticeship approach. They would apprentice me in their field and I would use the
knowledge that I gain to design new interfaces to help them out. So ethnography and apprenticeship
are huge fields of research both on their own and as they apply to HCI. So if you're interested in using
that approach take a look at some of the resources that we're providing.
A more targeted way of gathering information from users, though, is just to talk to them. One way of doing that might be to bring them in for an interview. So I'm sitting here with Morgan, who's one of the
potential users for our audiobook app targeted at exercisers. And we're especially interested in the kinds of tasks you perform while exercising and listening to audiobooks at the same time. So to start,
what kind of challenges do you run into doing these two things at once?
>> I think the biggest challenge is that it's hard to control it. I have headphones that have a button on
them that can pause it and play. But if I want to do anything else I have to stop, pull up my phone and
unlock it just to rewind.
>> Yeah, that makes sense. Thank you.
Interviews are useful ways to get at what the user is thinking when
they're engaging in a task. You can do interviews one on one like this or you can even do interviews in
a group with multiple users at the same time.
Here are five quick tips for conducting effective interviews. Now we recommend reading more about
this before you actually start interviewing people. But these should get you started. Number one.
Focus on the six Ws when you're writing your questions: who, what, where, when, why, and how. Try to avoid questions that lend themselves to one-word or yes-or-no answers; those are better gathered via surveys. Use your interviews to ask open-ended, semi-structured questions.
Number two. Be aware of bias. Look at how you're phrasing your questions and interactions, and make
sure you're not predisposing the participant to certain views. If you only smile when they say what you
want them to say, for example you risk biasing them to agree with you. Number three. Listen. Many
novice interviewers get caught up in having a conversation with a participant rather than gathering data
from the participant. Make sure the participant is doing the vast majority of the talking. And don't
reveal anything that might predispose them to agree with you. Number four. Organize the interview. Make sure to have an introduction phase, some lighter questions to build trust, and a summary at the end so the user understands the purpose of the questions. Be ready to push the interview forward or
pull it back on track. Number five. Practice. Practice your questions on friends, family, or research
partners in advance. Rehearse the entire interview. Gathering subjects is tough, so when you actually have them, you want to make sure to get the most out of them.
Interviews are likely to be one of the most common ways you gather data. So let's run through some
good and bad interview questions real quick. So here are six questions. Which of these would make
good interview questions? Mark the ones that would be good. For the ones that would be bad, briefly
brainstorm a way to rewrite the question to make it better. You can go ahead and skip forward to the
exercise if you don't want to listen to me read them out. Number one, do you exercise? Number two,
how often do you exercise? Number three, do you exercise for health or for pleasure? Number four,
what, if anything do you listen to while exercising? Number five, what device do you use to listen to
something while exercising? Number six, we're developing an app for listening to audio books while
exercising. Would that be interesting to you?
Personally, I think three of these are good questions. "Do you exercise?" is not a great question, because it's kind of a yes or no question. "How often do you exercise?" is actually the better way of asking the same question. It subsumes all the answers to "do you exercise?" but leaves more room for elaboration or more room for detail. "Do you exercise for health or for pleasure?" is not a great question, because it forces a false choice; people might exercise for both reasons or for some other reason entirely, so an open-ended question like "why do you exercise?" would work better.
Think-aloud protocols are similar to interviews in that we're asking users to talk about their perceptions
of the task. But with think-aloud, we're asking them to actually do so in the context of the task. So
instead of bringing Morgan in to answer some questions about listening to audiobooks while exercising,
I'll ask her to actually think out loud while listening to audiobooks and exercising. If this was a different
task like something on a computer, I could have her just come into my lab and work on it. But since this
is out in the world, what I might just do is give her a voice recorder to record her thoughts while she's
out running and listening. Now, think-aloud is very useful because it can help us get at users' thoughts that they forget once they are no longer engaged in the task. But it's also a bit dangerous: by asking people to think aloud about their task, we encourage them to think about it more deliberately, and that
can change the way they actually act. So while it's useful to get an understanding of what they are
thinking, we should check to see if there are places where what they do differs when thinking out loud
about it.
Most of the other methods for need finding, like observation, interviewing, apprenticeship, require a
significant amount of effort for what is often relatively little data. Or it's data from a small number of
users. We might spend an entire hour interviewing a single possible user or an hour observing a small
number of users in the world. The data we get from those interactions is deep and thorough, but
sometimes, we also want broader data. Sometimes, we just want to know how many people encounter
a certain difficulty or engage in a certain task. If we're designing an audiobook app for exercisers, for example, maybe we just want to know how often those people exercise, or maybe we want to know what kinds of books they listen to. At that point, a survey might be a more appropriate means of needfinding. Surveys let us get a much larger number of responses very quickly, and the questions can be phrased objectively, allowing for quicker interpretation. Plus, with the Internet, they can be administered asynchronously at a pretty low cost. A few weeks ago, for example, I came up with the
idea for a study on Friday morning. And with the great cooperation from our local IRB office, I was able
to send out the survey to potential participants less than 24 hours later and receive 150 responses
within a week. Now of course, the data I receive from that isn't nearly as thorough as what I would
receive from interviewing some of those participants. But it's a powerful way of getting a larger amount
of data. And it can be especially useful to decide what to ask participants during interviews or during
focus groups.
Survey design is a well-documented art form. And in fact, designing surveys is very similar to designing interfaces themselves, so many of the lessons we learn in our conversations apply here as well. Here
are five quick tips for designing and administering effective surveys. Number one, less is more. The
biggest mistake that I see novice survey designers make is to ask way too much. That affects the
response rate and the reliability of the data. Ask the minimum number of questions necessary to get
the data that you need and only ask questions that you know that you'll use. Number two. Be aware of
bias. Look at how you're phrasing the questions. Are there positive or negative connotations? Are
participants implicitly pressured to answer one way or the other? Number three. Tie them to the
inventory. Make sure every question on your survey connects to some of the data that you want to
gather. Start with the goals of the survey and then write the questions from there. Number four. Test
it out. Before sending it to real participants, have your co-workers or colleagues test out your survey.
Pretend they're real users and see if you would get the data you need from their responses. Number
five, iterate. Survey design is like interface design, test out your survey, see what works and what
doesn't and revise it accordingly. Give participants a chance to give feedback on the survey itself, so
that you can improve it for future iterations.
Writing survey questions is an art, as well as a science. So let's take a look at an intentionally poorly
designed survey, and see everything we can find that's wrong with it. So on the left is a survey. It's kind
of short, mostly because of screen real estate. Write down in the box on the right everything that is
wrong with this survey. Feel free to skip forward if you don't want to listen to me read out the
questions. On a scale of 1 to 4 with 1 meaning a lot and 4 meaning not at all, how much do you enjoy
exercising? Why do you like to exercise? On a scale of 1 to 6 with 1 meaning not at all and 6 meaning a
lot, how much do you like audiobooks? Have you listened to an audiobook this year?
Here are a few of the problems that I intentionally put into this survey. Some of them are kind of
obvious, but hopefully a couple others were a little bit more subtle and a little bit more interesting.
First, when I say on a scale of one to four with one meaning a lot and four meaning not at all, what do two and three mean exactly? Just giving the endpoints doesn't make for a very clear scale. We usually also want to provide an odd number of options, so that users have kind of a neutral central option. Sometimes we'll want to force our participants to take a side by using an even number of options, but that should be a deliberate choice rather than an accident.
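As an illustration of the fix (the anchor wording here is just one example, not a canonical scale), a clearer question labels every point and keeps an odd number of options so there's a neutral midpoint:

    # Example of a fully labeled, odd-numbered rating scale.
    # The anchor wording is illustrative, not canonical.
    ENJOYMENT_SCALE = {
        1: "Not at all",
        2: "A little",
        3: "A moderate amount",  # neutral middle option
        4: "Quite a bit",
        5: "A great deal",
    }

    for value, label in ENJOYMENT_SCALE.items():
        print(value, "=", label)

Labeling every point removes the guesswork about what the middle values mean, and keeping the same direction across questions avoids the reversed-scale confusion in the sample survey.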
So far we've discussed some of the more common approaches to needfinding. Depending on your domain, though, there might be some other things you can do. First, if you're designing for a task for
which interfaces already exist, you might start by critiquing the interfaces that already exist using some
of the evaluation methods that we'll cover later in the evaluation lesson. For example, if you're wanting
to design a new system for ordering takeout food, you might evaluate the interfaces of calling in an
order, ordering via mobile phone or ordering via a website.
Second, and similarly, [SOUND] if you're trying to develop a tool to address a problem that people are already addressing, you might go look at user reviews and see what people already like and dislike
about existing products. For example, there are dozens of alarm clock apps out there, and thousands of
reviews. If you want to design a new one, you could start there to find out what people need or what
their common complaints are.
In this lesson we've covered a wide variety of different methods for need finding. Each method has its
own disadvantages and advantages. So let's start to wrap up the lesson by exploring this with an
exercise. Here are the methods we've covered, and here are the potential advantages. For each row,
for each advantage, mark which need-finding method actually has that advantage. Note that these
might be somewhat relative, so your answer may differ from ours. Go ahead and skip to the exercise if
you don't want to listen to me read these out. The columns from left to right are Naturalistic
Observation, Participant Observation, Errors and Hacks, Interviews, Surveys, Focus Groups,
Apprenticeship, and Think-Aloud. The potential advantages are Analyzes data that already exists,
Requires no recruitment, Requires no synchronous participation, Investigates the participant's
thoughts, occurs within the task context, and cheaply gathers lots of users' data.
Here's my answer to this very complicated exercise. Two methods that analyze data that already exists are Naturalistic Observation and Errors and Hacks, although Naturalistic Observation doesn't necessarily analyze pre-existing data; it can also mean watching behavior live as it happens.
The needfinding exercises that we've gone through so far focus on the needs of the exercisers. What
can they do with their hands, what is the environment around them like while exercising, and so on?
However, that's only half the picture for this particular design. Our ultimate goal is to bring the
experience of consuming books to people that exercise, which means we also need to understand the
task of book-reading on its own. Now, our problem space is still centered on exercisers, so we wouldn't go
through the entire design life cycle for book reading on its own. We don't need to design or prototype
anything for them. But if we're going to bring the full book reading experience to people while
exercising, we need to understand what that is. So take a moment and design an approach to
needfinding for people who are reading on their own.
We could apply pretty much every single need-finding method that we've discussed to this task. We
could, for example, go to the library and just watch people reading and see how they're taking notes.
We've all likely done it ourselves. We can reflect on what we do while reading, although again, we need
to be careful not to over-value our own priorities and approaches. Reading is common enough, that we
can easily find participants for interviews, surveys, think alouds. The challenge here will be deciding
who our users really are. Books are ubiquitous. Are we trying to cater to everyone who reads
deliberately? If so, we need to sample a wide range of users. Or, initially, we could choose a subset. We
might cater to students who are studying or busy business people, or people that specifically walk or
bike to work. We might start with one of those groups and then abstract out over time. We might
eventually abstract all the way to just anyone who's unable to read and take notes the traditional way
like people driving cars or people with visual impairments but that's further down the road. The more
important thing is that we define who our user is, define the task in which we're interested, and
deliberately design for that user and that task throughout the design life cycle.
We've noted that design is a life cycle from needfinding to brainstorming design alternatives to
prototyping to evaluation. And then, back to needfinding to continue the cycle again.
Needfinding on its own though can be a cycle by itself. For example, we might use the results of our
naturalistic observation to inform the questions we ask during our interviews. Imagine
that we noticed that very many joggers, jog with only one earphone in. That's a naturalistic
observation, and then in an interview, we might ask, why do some of you jog with only one earphone
in? And we might get the answer from the interview that it's to listen for cars or listen for someone
trying to get their attention because they exercise in a busy area. Now that we understand why they
have that behavior, maybe we develop a survey to try and see how widespread that behavior is, and
ask, how many of you need to worry about what's around you when you're listening while exercising? If
we notice in those surveys a significant split in the number of people who were concerned about that,
that might inform our next round of naturalistic observation. We might go out and look and see in what kinds of environments that concern actually arises.
So in that way all of the different kinds of need finding that we do can inform our next round of other
kinds of need finding. We can go through entire cycles just of need finding without ever going on to our
design alternatives or prototyping stages. However, the prototyping and evaluation that we do will
then become another input into this. During our evaluation we might discover things that will then
inform what we need to do next as far as need finding is concerned. Creating prototypes and
evaluating them gives us data on what works and what doesn't. And that might inform what we want
to observe to better understand the task going forward.
That's the reason why the output of evaluation is more needfinding. It would be a mistake to do one
initial needfinding stage, and then jump in to a back and forth cycle of prototyping and evaluation.
During these need-finding exercises, you'll have gathered an enormous amount of information about
your users. Ideally, you've combined different sets of these approaches. You've observed people
performing the tasks, you've asked them about their thought process, and you tried it some yourself.
Pay special attention to some of the places where the data seem to conflict. Are these cases where you
as the designer understand some elements of the task that the users don't? Or are these cases where
your expertise hasn't quite developed to the point of understanding the task? Once you've gone
through the data gathering process, it's time to revisit that inventory of things we wanted to gather
initially. One, who are the users? Two, where are the users? Three, what is the context of the task?
Four, what are their goals? Five, what do they need right now? Six, what are their tasks? And seven,
what are the subtasks? Revisit these, with the results of your data gathering in mind.
Now that you have some understanding of the user’s needs, it’s time to try to formalize that into
something we can use in design. There are a number of different ways we can do this.
For example, maybe we create a step-by-step task outline of the user engaging in some task. We can
break those tasks down into sub-tasks as well, all the way down to the operator level.
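As an illustration of what such an outline might look like, here's a sketch in Python using our running audiobook example; the particular subtasks and operators are assumptions invented for this example.

```python
# A step-by-step task outline, decomposed from the task into subtasks
# and then down to operator-level actions.
task_outline = {
    "task": "Listen to an audiobook while jogging",
    "subtasks": [
        {"name": "Start the book",
         "operators": ["unlock phone", "open app", "press play",
                       "insert earphones", "pocket phone"]},
        {"name": "Take a note",
         "operators": ["retrieve phone", "press note button",
                       "speak the note", "press stop", "pocket phone"]},
        {"name": "End the session",
         "operators": ["retrieve phone", "press pause", "close app"]},
    ],
}

for subtask in task_outline["subtasks"]:
    print(subtask["name"], "->", ", ".join(subtask["operators"]))
```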
We might further augment this with a diagram of the structural relationships amongst the components
in the system and how they interact. This might give us some information about how we get feedback
back to the user or how they interact with our interface in the first place.
Finally, the last step for need-finding is to define our requirements. These are the requirements that our final interface must meet. They should be specific and evaluatable, and they can include some components that are outside of users' tasks as well, as defined by the project requirements.
In terms of user tasks, we might have requirements regarding functionality, what the interface can actually do; or cost, how much the final tool can actually cost; and so on. We'll use these to evaluate the interfaces we develop, going forward.
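To show what specific and evaluatable might look like in practice, here's a small sketch of requirements recorded alongside an explicit criterion for each; the thresholds are invented purely for illustration.

```python
# Each requirement pairs a statement with a concrete, checkable
# criterion, so we can later test whether the interface meets it.
requirements = [
    {"area": "functionality",
     "requirement": "User can start taking a note without looking at the screen",
     "criterion": "all usability-test participants start a note hands-free"},
    {"area": "efficiency",
     "requirement": "Taking a note only briefly interrupts listening",
     "criterion": "median interruption is under 10 seconds"},
    {"area": "cost",
     "requirement": "No hardware needed beyond the user's own phone",
     "criterion": "bill of materials for the product is zero"},
]

for r in requirements:
    print(f"[{r['area']}] {r['requirement']} (check: {r['criterion']})")
```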
How might need finding work in your chosen area of HCI? If you're looking at designing for some
technological innovation like augmented or virtual interactions, the initial phase might not actually be
that different. Your goal is to understand how people perform tasks right now without your interface.
So initially, you want to observe them in their naturalistic environment. Later though, you'll need to
start thinking about bringing participants to you to experience the devices first hand. If you're
interested in something like HCI for healthcare or education, you have a wealth of naturalistic
observation available to you. You might even have existing interfaces doing what you want to do. And
you can try to leverage those as part of your need finding exercises. Remember, no matter your area of
application, you want to start with real users. That might be observing them in the wild, talking to them
directly, or looking at data they've already generated.
Today, we've talked about need finding. Need finding is how you develop your understanding of the
needs of your user. What tasks are they completing? What is the context of those tasks? What else is going on? What are they thinking during the task, and what do they have to hold in working memory? All these things feed into your understanding of your users' needs.
Now we've discussed a number of different techniques to approach this, ranging from low intervention
to high intervention.
Working up, we can try to look more closely at users' spaces to find errors or hacks, or peruse the data that they're already generating. Or we might choose to work alongside them, not just participating in the task independently, but learning from them and developing expertise ourselves. Once you've gained a sufficient understanding, it's time to move on to the next step, brainstorming design alternatives.
time to move on to the next step, brainstorming design alternatives.
[MUSIC]. When we've developed a good understanding of the needs of our user, it's time to move on
to the second phase of the design life cycle, design alternatives. This is when we start to brainstorm
how to accomplish the task we've been investigating. The problem here is that design is very hard, it's
hard for a number of reasons. The number of choices we have to make, and things we have to control
is more expansive than ever before.
Are we designing for desktops, laptops, tablets, smart phones, smart watches, augmented reality,
virtual reality, 2D, 3D, gesture input, pen input, keyboard input, mouse input, voice input?
In this lesson, we'll first talk about how to generate lots of those ideas, and then we'll chat about how to explore those ideas a bit further to figure out what you want to actually pursue.
The biggest mistake that a designer can make is jumping straight to designing an interface without
understanding the users or understanding the task. The second biggest mistake though is settling on a
single design idea or a single genre of design ideas too early.
This can take on multiple forms. One form is staying too allegiant to existing designs or products. Take
the thermostat, for example, again. If you settled on tweaking the existing design of a thermostat you
would never invent the Nest. So if you're working on improving an existing interface, try to actually
distance yourself from the existing solutions, at least initially during the brainstorming session. But this
is also a problem if you're designing interfaces for new tasks as well.
The reason why this is such a common mistake, is that there's this natural tendency to think of it as a
waste of time to develop interfaces you're not going to end up using. You think you can get it done
faster just by picking one early on and sticking to it. But fleshing out ideas for interfaces you don't end
up using isn't a waste of time, because by doing so you continue to learn more about the problem. The
experience of exploring those ideas that you leave behind will make you a better designer for the ideas
that you do choose to pursue. In all likelihood your ultimate design will be some combination of the
design alternatives that you explored earlier. So, take my security system for an example. There are
two ways of interacting with it, the key pad and the key chain. Two different designs that, in this
particular instance, integrated just fine. Different alternatives won't always integrate side by side this easily, though.
When we talk about the problem we're solving here we define the problem space as disabling a security
system as we enter a home. We defined our problem as far as possible away from the current
interfaces for doing it. The design space on the other hand is the area in which we design our solutions
to this problem. The current design space for this problem is wall mounted devices and portable
devices like my key chain. But as we design, the space of possible ideas might expand. For example, as
we go along we might be interested in voice interfaces or interfaces with our mobile phones or
wearable devices. Our goal during the design alternative phase is to explore the possible design space.
We don't want to narrow down too early by sticking devices on walls or devices on keychains. We want
to brainstorm lots of possible approaches, and grow a large space of possible designs.
When you first start brainstorming, your goal is to generate a lot of ideas. These ideas can be very short,
very high-level, and very general. Your goal is just to generate an expanse of them. They don’t even
have to be ideas for interfaces: just any idea for solving the problem. If you look online, you’ll find
numerous great guides to how to brainstorm ideas.
One of the most interesting takeaways is that research generally indicates that it’s better to start with
individual brainstorming. That's counterintuitive: so often we hold meetings for brainstorming, but it should start individually. That's because brainstorming is most effective when it initially just generates a lot of ideas, but groups tend to coalesce around ideas pretty early.
Think about how you’d design with different types of interactions, like gestures, voice, or touch. Think
about how you’d design for different interfaces, like smart watches, tablets, augmented reality. Think
about how you’d design for different audiences, novices and experts, kids and adults. Get silly with it,
some of the best ideas start as silly ideas. How would you design this for your dog, for your cat? How
would you design this for someone with three arms? With one arm? Go nuts. Your goal is to generate
a lot of ideas. These are going to get loaded into your mind and they’ll crop up in interesting ways
throughout the rest of the design process. That’s why it’s important to generate a lot: you never know
what will come up.
So, I'm going to demonstrate this for you real quick. I'm going to brainstorm ideas for our problem of
allowing exercisers to consume books and take notes. So, I have my paper for brainstorming, so please
enjoy this 30 minute video of me sitting here writing at a desk.
Here are five quick tips for effective individual brain storming. Number one, write down the core
problem. Keep this visible. You want to let your mind enter a divergent thinking mode but you also
want to remain grounded in the problem. Writing down the problem and keeping it available will help
you remain focused while remaining creative. Number two. Constrain yourself. Decide you want at
least one idea in a number of different categories. Personally, I try to make sure I have at least three
ideas that use nontraditional interaction methods, like touch and voice. You can constrain yourself in
strange ways too. Force yourself to think of solutions that are too expensive or not physically possible.
The act of thinking in these directions will help you out later. Number three. Aim for 20. Don't stop
until you have twenty ideas. These ideas don't have to be very well-formed or complex, they can be
simply one sentence descriptions of designs you might pursue. This forces you to think through the
problem, rather than getting tunnel vision on an early idea. Number 4. Take a break. You don't need
to come up with all of these at once and, in fact, you'll probably find it's easier if you leave and come
back. I'm not just talking about a ten minute break either. Stop brainstorming and decide to continue a
couple days later but be ready to write down new ideas that come to you. Number 5. Divide and
conquer. If you're dealing with a problem like helping kids lead healthier lifestyles, divide it into smaller
problems and brainstorm solutions to those. If we're designing audiobooks for exercisers, for example,
we might divide it into things like the ability to take and review notes, or the ability to control playback
hands-free. Divide it like that and brainstorm solutions to each individual little problem.
Group brainstorming presents some significant issues. Thompson in 2008 laid out four behaviors in
group brainstorming that can block progress. The first is social loafing. People often don't tend to work
as hard in groups as they would individually. It's easy to feel like the responsibility for unproductive
brainstorming is shared and deflected. In individual brainstorming, it's clearly on the individual.
The second blocker is conformity, people in groups tend to want to agree. Studies have shown that
group brainstorming leads to convergent thinking. The conversation the group has tends to force
participants down the same line of thinking, generating fewer and less varied ideas than the individuals
acting alone. During brainstorming, though, the goal is divergent thinking: lots of ideas, lots of
creativity.
The third blocker is production blocking. Only one person in a group can speak at a time, so ideas can be lost or suppressed while people wait for their turn to talk. The fourth blocker is performance matching. People tend to converge in terms of passion and performance, which can lead to a loss of momentum over time. That might get people excited if they're around other excited people initially, but more often than not, it saps the energy of
those who enter with enthusiasm. In addition to these four challenges, I would add a fifth.
To have an effective group brainstorming session, we need to have some rules to govern the
individuals' behavior to address those common challenges. In 1957, Osborn outlined four such rules.
Number one, Expressiveness. Any idea that comes to mind, share it out loud, no matter how strange.
Number two, nonevaluation. No criticizing ideas, no evaluating the ideas themselves yet.
Number three, quantity. Focus on generating as many ideas as possible; quality can come later.
Number four, building. While you shouldn't criticize others' ideas, you should absolutely try to build on
them. Then, in 1996, Oxley, Dzindolet and Paulus presented four additional rules.
Number one, stay focused. Keep the goal of the session in mind at all times.
Number two, no explaining ideas. Say the idea and move on. No justifying ideas.
Number three, when you hit a roadblock, return to the problem. Restate it, rephrase it, and keep going.
Number four, encourage others. If someone isn't speaking up, encourage them to do so. Note that all
eight of these rules prescribe what individuals should do, but they're only effective if every individual
does them. So it's good to cover these rules, post them publicly, and call one another on breaking from
them.
The rules given by Osborn, Oxley, Dzindolet and Paulus are about helping individuals understand how
they should act in group brainstorming. Here are a few additional tips though that apply less to the
individual participants and more to the design of the activity as a whole. Number one, go through every
individual idea. Have participants perform individual brainstorming ahead of time and bring ideas to
the group brainstorming session. And explicitly make sure to go through each one. That will help avoid
converging on one idea too early and will make sure everyone is heard. Number two, find the optimal
size. Social loafing occurs when there's a lack of individual responsibility. When you have so many
people that not everyone would get to talk anyway, it's easy for disengagement to occur. I'd say a
group brainstorming session should generally not involve more than five people. If more people need
to give perspectives than that, then you can have intermediate groups that then send ideas along to a
later group. Number three, set clear rules for communication. Get a 20 second timer and when
someone starts talking, start it. Once the timer is up, someone else gets to speak. The goal is to ensure
that no one can block others' ideas by talking too much, whether intentionally or accidentally. Number
four, set clear expectations. Enthusiasm starts to wane when people are unsure how long a session will
go or what will mark its end. You might set the session to go a certain amount of time or dictate that a
certain number of ideas must get generated. No matter how you do it, make sure that people
participating can assess where in the brainstorming session they are. Number five. End with ideas, not
decisions. It's tempting to want to leave a brainstorming session with a single idea on which to move
forward, but that's not the goal. Your brainstorming session should end with several ideas. Then let
them percolate in everyone's mind before coming back and choosing the ideas to pursue later.
The brainstorming process should lead you to a list of high-level, general design alternatives. These are likely just a few words or a sentence each, but they describe some very general idea of how you might design the interface to accomplish the task. Your next step is to try to flesh these ideas out
into three or four ideas that are worth taking forward to the prototyping stage. Some of the ideas you
might be able to dismiss pretty quickly, that's all right. You can't generate good ideas without
generating a lot of ideas. Even though you won't end up using all of them. In other places, you might
explore an idea a little before dismissing it or you might combine two ideas into a new idea. In the rest
of this lesson, we'll give you some thought experiments you can use to evaluate these ideas and decide
what to keep, what to combine and what to dismiss.
The first common method we can use to flesh out design alternatives is called personas. With personas
we create actual characters to represent our users. So let's create a persona for the problem of helping
exercisers take notes while reading books. We'll start by giving her a name and a face, and then we'll fill
out some details. We want to understand who this persona is. We want to be able to mentally simulate
her. We want to be able to say, what would Anika do in this situation? What is she thinking when she's
about to go exercise? What kind of things might interrupt her? We might want to put in some more
domain specific information as well. Like, why does this person exercise? When do they exercise?
What kind of books do they like? How are they feeling when they're exercising? Where do they usually
exercise?
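For teams that want personas they can share and query, here's one possible way to record a persona as structured data; everything about Anika below is invented for illustration.

```python
from dataclasses import dataclass, field

# A persona is a structured character we can "ask" questions of:
# what would Anika do, think, or feel at each point in the task?
@dataclass
class Persona:
    name: str
    age: int
    occupation: str
    motivation: str        # why does she exercise?
    context: str           # when and where does she exercise?
    reading_taste: str     # what kinds of books does she like?
    frustrations: list = field(default_factory=list)

anika = Persona(
    name="Anika", age=29, occupation="accountant",
    motivation="training for a 10K and managing stress",
    context="jogs at 6 a.m. on busy city streets",
    reading_taste="mysteries and popular science",
    frustrations=["hates stopping mid-run",
                  "forgets her ideas by the end of a jog"],
)

print(anika.name, "-", anika.motivation)
```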
We want to create at least three or four of these different personas, and perhaps more depending on
how many different stakeholders we have for our problem. The important thing is that these should be
pretty different people, representing different elements of our designs, different elements of our task,
so we can explore its entire range. We don't want to design just for Anika, but we do want to design for her, along with all of our other personas.
Personas are meant to give us a small number of characters that we can reason about empathetically. However, it can sometimes also be useful to formulaically generate a large number of user profiles to
explore the full design space.
We can do this by defining a number of different variables about our users and the possibilities within each. So here are a few examples. We can ask ourselves, do we care about novice users or expert users
or both? Do we care about users that read casually, that read seriously, or both kinds of users? Do we
only want to cater to users that are highly motivated to use our app, which can make things a little bit
easier on us? Or do we want to assume that it won't take much to stop them from using our app? Can
we assume a pretty high-level of technological literacy, or are we trying to cater to more casual users as
well? And are we interested in users that are going to use our app all the time, or in users who are
going to use our app only occasionally, or both? All of these decisions present some interesting design
considerations that we need to keep in mind. For example, for users that are going to use our tool very
often, our major consideration is efficiency. We want to make sure they can do what they need to do
as quickly as possible. And oftentimes, that might mean relying on them to know more about how to use the tool up front.
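Because this kind of profile generation is just a cross product of variables, it's easy to sketch in code. Here's a minimal example using Python's standard library; the variables mirror the questions above.

```python
from itertools import product

# Each variable and its possible values; crossing them enumerates
# the full space of user profiles.
variables = {
    "expertise":  ["novice", "expert"],
    "reading":    ["casual reader", "serious reader"],
    "motivation": ["highly motivated", "easily discouraged"],
    "tech":       ["tech-savvy", "casual technology user"],
    "frequency":  ["frequent user", "occasional user"],
}

profiles = [dict(zip(variables, combo))
            for combo in product(*variables.values())]

print(len(profiles), "profiles")  # 2**5 = 32 combinations
print(profiles[0])
```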
Building on the idea of a persona, we can take that person and stretch her out over time and see what she is thinking and what she is doing at various stages of interacting with our interface, or interacting with the task at hand. I've also heard this called journey maps, although journey maps usually cover much longer periods of time. They cover the entire expanse of the person's life: why they are interested in something and where they are going from here. Timelines can be narrowed to the specific time during which users are interacting with the task or with the program. So our goal is to take that persona and stretch it out over time.
So for our example, what sparks Anika to decide to exercise in the first place? That might be really
useful information to know.
Then what does she do? In this case, maybe she sets up her audiobook: she actually pushes play, puts her headphones in, and so on.
And then at the end, she turns off the audiobook. The usefulness of drawing this as a timeline is it starts
to let us ask some pretty interesting questions. What prompts this person to actually engage in the task
in the first place? What actions lead up to the task? How are they feeling at every stage of the task?
And can we use that? How would each design alternative impact their experience throughout this
process? For example, if our persona for Anika was that she really doesn't like to exercise but she
knows she really needs to, then we know her mood during this phase might be kind of glum. We need
to design our app with the understanding that she might have kind of low motivation to engage in this
at all. If our app is a little bit frustrating to use, then it might turn her off of exercising all together. On
the other hand, if Anika really likes to exercise, then maybe she's in a very good mood during this phase.
And if she likes exercising on its own, maybe she forgets to even set up the audio book at all. So then
we need to design our app with that in mind. We need to design it such that there's something built
into it that could maybe remind her that when she gets to a certain location, she meant to start her
audio book. So stretching this out as a timeline lets us explore not only who the user is, but also what
they're thinking and what they're feeling. And how what we design can integrate with a task that
they're participating in. Exploring our different design alternatives in this way allows us to start to compare how each one would fit into the user's experience.
We can create general timelines of routine interactions with our design alternatives, but it's often even
more interesting to examine the specific scenarios users will encounter while using our interfaces.
Rather than outlining the whole course of interaction, scenarios let us discuss specific kinds of
interactions and events that we want to be able to handle. These are sometimes also referred to as
storyboards, sequences of diagrams or drawings that outline what happens in a particular scenario. The
difference between timelines, and storyboards, and scenarios is that timelines, in my experience at
least, tend to be pretty general. They're how a routine interaction with the interface, or a routine
interaction with a design alternative goes. Scenarios and storyboards are more specific. They're about a
particular person interacting in a particular way, with particular events coming up. So let's build one of
these scenarios.
Morgan is out jogging when a fire engine goes by. It's so loud that she misses about 30 seconds of the
book. How does she recover from this?
In our unit on principles, we talk about task analysis, including things like cognitive task analysis or human information processor models. Performing those analyses as part of our need finding also gives us a nice tool for exploring our design alternatives. Using this, we can start to look at how exactly the goals, operators, methods, and selection rules of the GOMS model map onto the ideas of our design
alternatives. How does the user achieve each of their goals in each interface? How relatively easy are
the goals to achieve between the different design alternatives? With the results of our cognitive task
analysis, we can start to ask some deeper questions about what the user is keeping in mind as well.
Given what we know about things competing for our user's attention, what are the likelihoods that
each interface will work? In some ways, this is a similar process to using personas we outlined earlier,
but with a subtle difference. Personas are personal and meant to give us an empathetic view of the
user experience. User models are more objective and meant to give us a measurable and comparable
view of the user experience. So ideally, the result of this kind of analysis is we would be able to say that
the different alternatives have these different phases and these different phases have different
efficiencies or different speeds associated with them. So, we could start to say exactly how efficient one
design is compared to another.
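As a rough sketch of the kind of comparison a user model supports, here's a keystroke-level-model style estimate in Python. The operator times are commonly cited approximations from the keystroke-level model literature, and the operator sequences for each design alternative are my own assumptions, invented for illustration.

```python
# Approximate keystroke-level model operator times, in seconds.
OP_TIMES = {
    "M": 1.35,  # mentally prepare for an action
    "H": 0.40,  # home hands on the device
    "P": 1.10,  # point to a target on screen
    "K": 0.20,  # press a key or button (or speak a short command)
}

def predicted_time(ops):
    """Sum the operator times for one way of performing the task."""
    return sum(OP_TIMES[op] for op in ops)

# Hypothetical sequences for "start taking a note" in two designs.
touch_app = ["H", "M", "P", "K", "M", "P", "K"]  # pull out phone, find app, tap
voice_ui  = ["M", "K"]                           # think, then speak the command

print(f"touch app: {predicted_time(touch_app):.2f} s")
print(f"voice ui:  {predicted_time(voice_ui):.2f} s")
```

Under these assumed sequences, the voice design comes out several seconds faster per note, which is exactly the kind of measurable, comparable claim a user model gives us.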
In this lesson we've covered several different ways of developing design alternatives. Each method has
its advantages and disadvantages. So, let's start to wrap the lesson up by exploring this with an exercise. Here are the methods that we've covered, and here are some potential advantages. For each row,
mark which of the different methods possesses that advantage. Note that these might be somewhat
relative, so your answer might differ from ours and that's perfectly fine.
So, personally, here's my answer. For me, scenarios are really the only ones that include the entire task
context. You can make the case that personas and timelines do as well, but I tend to think that these
are a little bit too separated from the task to really include the context. Personas and user profiles, on
the other hand, do include the user's context. They include plenty of information about who the user is,
why they're doing this task, and what their motivations are. You could make the argument as well, that
scenarios and timelines include the user's context. Because the way we describe them, they're
instances of the personas being stretched out over a scenario or over time. User profiles probably do as well.
So let's apply these techniques to some of the ideas that I came up with earlier. The first thing I might
do is go ahead and rule out any of the ideas that are just technologically unfeasible. Coming up with
those wasn't a waste of time because they're part of a nice broad free flow brainstorming process. But
skin based interfaces and augmented reality, probably are not on the table for the immediate future. I
might also rule out the options that are unfeasible for some more practical reasons. We might be a small team of developers, for example, so a dedicated wearable device isn't really within our expertise. The one I might do next is to create some timelines, covering a sequence of events in exercising, to use to
explore these alternatives further. I might notice that the users I observed and talked with, valued
efficiency in getting started. They don't want to have to walk through a complex set up process every
time they start to exercise. I might also use my user personas to explore the cognitive load of the users
in these different alternatives. They have a lot going on, between monitoring their exercise progress,
observing their environment, and listening to the book. So, I'm going to want to keep cognitive load
very low. Now granted, we always want to keep cognitive load pretty low, but in this case, the
competing tasks are significant enough, that I want to sacrifice features for simplicity, if it keeps that
cognitive load pretty manageable. Now based on these timelines and these personas, I would probably
end up here with three design alternatives that I want to explore. One is a traditional touch interface, a
smartphone app. That unfortunately means the user is going to have to pull out their phone
whenever they want to take a note. But if I can design it well enough that might not be an issue. I also
know that approach gets me a lot of flexibility so it's good to at least explore it. A second approach is
gestural interfaces. I know that people aren't usually holding their device while exercising. So it would
be great if they had some way of interacting without pulling out their phone. Gestures might let us do that. Now, gesture recognition is in its infancy, but we might be able to leverage smartwatch technology or something like a Fitbit to support interaction via gestures. A third approach is a voice
interface, I know people generally aren't communicating verbally while exercising, so why not a voice
interface? That can even double as the note taking interface. So now that I have three design
alternatives that I'm interested in exploring, I would move on to the prototyping stage which is building
some version of these that I can test with real users.
Design alternatives are where you explore different ways to facilitate the user's task. If you've already chosen to focus on a certain area of technology, like wearable devices or gestural interaction, then in some ways, you've put the cart before the horse. You've chosen your design before exploring the problem. As a learning experience though, that's perfectly fine. It's fine to say, I want to explore augmented reality and I'm going to find a task that lets me do that. You're still exploring whether or not augmented reality is the right solution for that task. You're just altering the task, instead of altering the design, if it's not. For other domains, you might need to make sure to create personas for different stakeholders. In healthcare, for instance, you would want to make sure any interface you design takes into consideration nurses, doctors, patients, managers, family members, and more. So you'd want to create personas for all those different types of people as well, and make sure to explore scenarios that affect each stakeholder.
The goal of the design alternative stage of the design life cycle, is to generate lots of ideas, and then
synthesize those ideas into a handful worth exploring further.
So we started with some heuristics for generating lots and lots of ideas, through both individual and
group brainstorming.
Introduction to Prototyping
[MUSIC] So we've talked to our users. We've gathered some understanding of what they need. We've
created some ideas for how we might address their need and we've mentally simulated those different
alternatives.
Now it's time to start actually making things we can put in front of users. This is the prototyping stage.
Like brainstorming design alternatives, this involves looking at the different ideas available to us and
developing them a bit. But the major distinction is that in prototyping, we want to actually build things
we can put in front of users. But that doesn't mean building the entire interface before we ever even show it to anyone.
So we'll start with low fidelity prototypes, things that can be assembled and revised very quickly for
rapid feedback from users.
Then we'll work our way towards higher fidelity prototypes, like wire frames or working versions of our
interface.
To discuss prototyping there are a variety of different terms and concepts we need to understand. For
the most part, these will apply to where in the prototyping timeline those concepts are used. In the
early prototyping, we're doing a very rapid revision on preliminary ideas. This happens on our first few
iterations through the design lifecycle. In late prototyping, we're putting the finishing touches on the
final design, or revising a design that's already live. This happens when we've already been through
several iterations of our design lifecycle. At the various phases, we'll generally use different types of
prototypes in evaluations. Now, note that everything I'm about to say is pretty general, there will be
lots of exceptions.
The first concept is representation, what is the prototype? Early on, we might be fine with just some
textual descriptions or some simple visuals that we've written up on a piece of paper. Later on though,
we'll want to make things more visual and maybe even more interactive. We only want to put the work
into developing the more complex type of prototypes once we vetted the ideas with prototypes that
are easier to build. So in a lot of ways, this is a spectrum of how easy prototypes are to build over time.
A verbal prototype is literally just a description, and I can change my description on the fly. A paper
prototype is drawn on paper, and similarly, I could ball up the paper, throw it away, and draw a new
version pretty quickly. But things like actual functional prototypes that really work, those take a good
bit of time. And so we only want to do those once we've already vetted that the ideas that we're going
to build actually have some value. You don't want to sink lots of months and lots of engineering
resources into building something that actually works, only to find out there's some feedback you could
have gotten just based on a drawing on a piece of paper that would have told you that your idea wasn't
a very good one.
Now these different kinds of prototypes also lend themselves to different kinds of evaluation
structures. Low fidelity prototypes can be fine for evaluating the relative function of an interface, whether or not it can do what it's designed to do. If a user looks at the interface, can they figure out what they're supposed to press? You can prototype that with just a drawing on a piece of paper, as opposed
to a real functional prototype. Things like wireframes can be useful in evaluating the relative readability
of the interface as well. However, to evaluate actual performance, like how long certain tasks take, or
what design leads to more purchases. We generally need a higher fidelity prototype, through more
iterations of the design lifecycle. So early on, we're really just evaluating whether or not our prototype
even has the potential to do what we want it to do. Can a user physically use it? Can they identify what button to press and when? Later, to evaluate things like readability, we need additional detail like font size and real screen layout. We
need a real prototype that looks the way the final interface will look, even if it doesn't work quite yet.
And then, to evaluate performance we really need a prototype that's working, or close to working, to
evaluate certain tasks.
So, those are four of the main concepts behind prototyping. There are other questions we might ask
ourselves as well. Like whether we're prototyping iterative or revolutionary changes, and the extent to
which the prototype needs to be executable. But in many ways, those fall under these previous
concepts.
Prototyping is largely about the tradeoffs we have to make. Low fidelity prototypes like drawings are
easy to create and modify, but they aren't as effective for detailed comprehensive evaluations. High
fidelity prototypes, like actual working interfaces, can be used for detailed feedback and evaluation,
but they're difficult to actually put together.
So, our goal is to balance these trade-offs. We want to start with easier, low fidelity prototypes to get
initial ideas, to evaluate big designs and big plans, and make sure we're on the right track.
Then as we go along, we can move to the higher fidelity prototypes that take more time to assemble
because we have initial evidence that our designs are actually sound. It's really important here also, to
note that our prototypes are prototypes. They aren't complete interfaces. We've discussed in the past that designers often have a tendency to jump straight to designing rather than getting to know their users. That's a big risk here as well, because now we really are designing.
Here are five quick tips for effective prototyping. Number one, keep prototypes easy to change. Your goal here is to enable rapid revision and improvement. It's easy to make quick changes to something on paper, but it's harder to make them in code or physical prototypes. Number two, make it clear that it's a
prototype. If you make a prototype look too good users may focus on superficial elements like colors or
font. By letting your prototype look like a prototype you can help them focus on what you're ready to
test. Number three, be creative. Your goal is to get feedback. Do whatever it takes to get feedback.
Don't let the type of prototype you're designing constrain the type of feedback you can get. If you find
your current prototypes don't give you the right kind of feedback, find ones that do. Number 4,
evaluate risks. One of the biggest goals of prototyping is to minimize the time spent pursuing bad
designs by getting feedback on them early. How much would you lose if you found that users hate the
parts of your design that they haven't seen yet? Whenever that answer gets to be more than a couple of
hours, try to get feedback to make sure you're not wasting time. Number five, [SOUND] prototype for
feedback. The goal of a prototype is to get feedback. You could spend a lot of time focusing on details
like font selection and color choice, I know I do, but that's probably not the feedback you need when
you're exploring your big alternatives. Prototype for the kind of feedback you want to get.
At the very simplest, we have verbal prototypes. That means we're literally just describing the design
we have in mind to our user. That's probably the lowest fidelity prototype possible. It's literally just
telling the user the same thing we tell our co-designers. So it's extremely easy to do, although it can be
hard to do effectively. Social desirability bias is big here because it's difficult to describe our idea in a
way that allows the participant to feel comfortable disagreeing with us. So we need to make sure to ask
for specific and critical feedback. At the same time though, how do we really know that the user
understands the design we're describing? We're working towards becoming experts in the areas in
which we're designing, and we don't want to fall victim to expert blind spot by assuming our
explanation makes sense to a novice. For that reason, analogies can be powerful tools for explaining
prototypes. Describe your interface in terms of other tools the user might already know about. So for
example, imagine I was pitching the idea of InstaCart, a grocery delivery company. I might've described
it like, it's like Uber for groceries. Uber is a service kind of like taxis, and InstaCart is kind of like a taxi
for groceries. That way of describing it in terms of an analogy to another interface can be a powerful
way of helping your participant understand your idea more quickly.
One step above just ‘describing our ideas’ to our user in a verbal prototype would be drawing them out.
This is what we call a paper prototype. We could do this for anything from designing an on-screen
interface to designing the placement of controls in a vehicle. Let’s go back to our example of designing a
way for exercisers to consume and take notes on audiobooks. Let’s imagine one of my design
alternatives was just for a very easy-to-use app so that the hassle of pulling out the phone isn’t too
distracting. I might start this process simply by drawing up a prototype on paper.
Now that I have a paper prototype, I can talk to my user and ask her thoughts on it. We'll talk more
about the kinds of thoughts I’ll ask for when we discuss evaluation. But generally, I can say: hey
Morgan, what about this design?
And notice that she didn’t comment on color or font or anything like that not because I said "hey, I'm
ignoring font right now" but because it’s pretty obvious that I'm not caring about font right now. The
nature of the paper prototype is it tells the user what kind of feedback we're looking for. We're looking
for pretty basic layout information. Now because this is on paper, I can actually immediately revise my
prototype and incorporate things that she suggested.
Now, I have a revision based on her feedback, and I can ask: hey, how's that?
Now paper prototyping isn’t only useful for testing out just single interface designs. You can also do
some interaction with it. Watch.
Makes sense. After you press that what you're going to see is this screen for note-taking. What would
you do then?
Yeah! Exactly. And then it would start recording you and it would transcribe what you say and at the
end you would press stop to continue.
Interestingly, just doing this I've already noticed that there's really no reason to make her jump to a
separate note-taking screen when she wants to take a note while listening. It should actually just be a
note-taking button here on the main screen.
So just like this I get to walk through some of the interaction with the app on paper and get some ideas
like that on how to improve it. This is also called card-based prototyping. The idea is each screen is on a
different card and I can quickly sub those cards in and out so we can simulate what it would be like to
use the real application. So, that way, I can prototype a decent amount of the interface’s interaction
with pretty low prototyping effort.
>> Neat.
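A card-based prototype is, in effect, a small state machine: each card is a state, and each button press moves to another card. Here's a minimal sketch of that idea; the screens and buttons are assumptions based on the example above.

```python
# Each "card" is a screen; each button names the card it leads to.
# During a session, the facilitator plays computer by swapping cards.
cards = {
    "main":    {"play": "playing", "library": "library"},
    "playing": {"pause": "main", "take note": "note"},
    "note":    {"stop": "playing"},
    "library": {"back": "main"},
}

screen = "main"
for press in ["play", "take note", "stop", "pause"]:
    screen = cards[screen][press]
    print(f"pressed '{press}' -> show the '{screen}' card")
```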
Paper prototyping is great when we're designing flat interfaces for screens. But what if you're designing
a voice interface or a gesture interface? How do you prototype that? One way is called Wizard of Oz
prototyping. Wizard of Oz prototyping is named as a reference to the famous scene from the classic
movie of the same name.
The idea here is, that we, behind the curtain, do the things that the interface would do once it's actually
implemented. That way we can test out the interactions that we plan to design and see how well they'll
work.
So I have Morgan, and we're going to do a Wizard of Oz prototype for an interface that will allow
exercisers to consume and take notes on audiobooks. So, I'll start by briefly telling her how the interface
would work. So, this interface is run by voice command; I'll simulate it on my phone. To control it, you'll say,
>> Pause.
>> Play.
I just realized actually I need a way for you to stop the note when you're done. So say close note when
you're done taking your note.
All right, you can stop. [LAUGH]. Now based on this prototype, I can also ask for some feedback. So, Morgan, do you think it should automatically start playing again when you close the note? Or what should it do?
>> Well okay, so I think I'd actually like it to start playing five seconds back. Because I imagine saying,
note, is going to step over some of the content.
Yeah, that makes sense, and we can go through this and quickly test out different ideas for how the
interaction might work. In practice, Wizard of Oz prototypes can actually get very complex. You can
have entire programs that work by having a person supply the requested input at the right time. But as
a concept, a Wizard of Oz prototype is a prototype where the user can interact authentically with the system, while a human supplies the functionality that hasn't yet been implemented.
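In fact, a Wizard of Oz setup can be partially scripted: the wizard types whatever the not-yet-implemented voice recognizer would have heard, and simple code handles the rest. Here's a hypothetical sketch of such a harness, not anything from an actual product.

```python
# The wizard listens to the participant, then types the command the
# future recognizer would have produced; the code reacts the way the
# real system eventually will.
state = {"playing": False, "notes": []}

def handle(command):
    if command == "play":
        state["playing"] = True
    elif command == "pause":
        state["playing"] = False
    elif command.startswith("note "):
        state["notes"].append(command[len("note "):])
    return state

while True:
    heard = input("wizard> ")  # e.g. "play", "pause", "note check ch. 3"
    if heard == "quit":
        break
    print(handle(heard))
```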
Paper prototyping involves drawing things simply on paper, and it can be really useful for experimenting with overall layouts, especially because it tends to be a lot easier to revise and tweak those pretty naturally.
After some feedback though, you'll likely want to start formalizing your designs a little bit more. One
way of doing this would be called wire framing. In wire framing we use some more detailed tools to
mock up what an interface might look like. For example, my paper prototype from earlier might
become a wire frame like this. This lets us experiment with some additional details like font size, colors,
and the challenges of screen real-estate. Now there are lots of tools out there for wire framing that
come equipped with built-in common widgets and layouts, but you can also do some rudimentary
wireframing in something as simple as PowerPoint.
Google drawings can be used the same way as well. So you don't need to get super fancy, although if
you do a lot of wire framing you'll probably want to find a more streamlined tool.
...and Axure. These are both targeted at professionals working in user interface or user experience
design. Especially on teams and collaborating with a lot of people.
...or InDesign might also be great options for you to use because you're already somewhat familiar with
those interfaces.
...and Frame Box. Those are great to use, especially if you're just getting started. And of course, these
are just the popular ones that I know about right now. There are almost certainly more out there that
I'm not familiar with and more will surely come available, so check with your classmates or colleagues
to see what they would recommend. I'm personally really excited to see what kind of prototyping
options emerge for areas like virtual reality and augmented reality, where you can't really prototype on
a 2D canvas like these.
Wire framing is great for prototyping on-screen interfaces, but again, what if you're working on
something more physical or three-dimensional? In that case, you might want to construct a physical
prototype. But let's be clear, it doesn't have to actually work. That's where a lot of designers get
tripped up. They think to get good feedback on a design they have to have a working version, but you
don't. There are lots of elements you can test without actually implementing anything. So let's go back
to our example of designing a way for exercisers to take notes on audio books.
One of my alternatives might be a Bluetooth device that synchronizes with the phone with buttons for
different interactions. The individual will hold this while exercising and interact by pressing different
buttons for play or pause or take a note. I've prototyped this just by taking my car's key fob, and we
could just say that, pretend this button does this and this button does this. It's probably not the exact
shape that I want, but it's pretty close. It's probably about the same size. And I can test the general idea
of pressing buttons while exercising with this. And I can actually do a lot of testing with this. I can tell
Morgan how it works and watch carefully to see if the buttons she presses are the right ones to accomplish what she intends.
In this lesson we've covered various different methods for prototyping. Each method has its advantages
and disadvantages. So let's start to wrap up the lesson by exploring this with another exercise. Here are
the methods that we've covered and here are some of the potential advantages. For each row, mark
the column to which that advantage applies. Note that as always, these are somewhat relative, so your
answer might differ from ours.
So here are my answers. First when we're talking about things that are revisable during interaction,
we're talking about things that I as the experimenter can get feedback from my user and immediately
change my prototype. So if they say that that button label doesn't really makes sense, I can cross out
that button label and immediately change it. That makes sense for prototypes that are very low fidelity.
Verbal prototypes, I can immediately say okay, then let's make it the way you just described. Paper
prototypes or card prototypes, I could quickly erase or cross out something on my prototype and
change it. Wizard of Oz is similar. Since I'm running what's going on behind the scenes, I can just change the behavior on the fly.
At this point, there's a risk of a major misconception that we should cut off right now. We started with
need finding, then develop some design alternatives, and now we're prototyping. We've talked about
how prototyping follows a timeline to low fidelity to high fidelity prototypes, from early to late
prototyping. We might think that we move on to evaluation when we're done prototyping. That's not
the way the design life cycle works though. We go through this cycle several times for a single design
and a single prototype corresponds to a single iteration through the cycle. So we did some initial
needfinding, we brainstormed some alternatives, and we prototyped those alternatives on paper. We
don't jump straight from doing them on paper to doing them via wire framing or doing a functional
prototype. We take those prototypes and we use them for evaluation. We evaluate those paper
prototypes with real people. The results of that evaluation tell us if we need to go back and understand
the task even better. Those results help us reflect on our alternatives as a whole, maybe come up with
some new ones. Then, equipped with the results of that evaluation, that additional needfinding, and
that additional brainstorming, we return to the prototyping phase. If our prototype seemed to be pretty
successful and pretty sound, then maybe it's time to raise the fidelity of it. Maybe we take it from a
paper prototype and actually do some wire frames, or do a card prototype around the actual interaction.
If it wasn't very successful though, when we reach back here, we're going to do a different paper
prototype, or a different low fidelity prototype, and then go to evaluation again. Each time we develop
a new prototype we go through the same cycle again. Now that might sound extremely slow and
deliberate, but we also go through this on very different time scales too. So for example, after we've gone through needfinding and design alternative brainstorming, we can develop a paper prototype.
We give it to a user and get their evaluation. They say that they don't like it. We ask them why, we ask
them to describe what about their task isn't supported by that interface. That's in some ways another
needfinding stage. Then we brainstorm real quick how we could resolve that. Maybe we just do that
while we're sitting with that user: we think, it didn't support this element of what they described, but I could add that pretty quickly just by making this button or this function more visible. Now we very quickly have a new prototype just by sketching out that addition to the paper prototype, and now we can do it again. This cycle could take one minute. We could take one prototype, put it in front of a user, and revise it on the spot based on their feedback.
There's one other misconception that I've seen in some designers I've worked with that I feel is also
worth explicitly acknowledging. All your prototypes don't have to be at the same level, at the same
time. Take Facebook for example. Facebook is a complete app already implemented.
Imagine that Facebook wanted to redesign their status update box, which they've done pretty recently
and have probably done since I recorded this. Just because the interface is complete in other ways
doesn't mean that all future prototyping efforts need to be similarly high fidelity. They don't need to
implement an entire new status composition screen just to prototype it. They can prototype it in lower
fidelity with sketches, or wire frames, put that in front of users, get their feedback, before ever actually
implementing it into a functional prototype or a working part of the website. This applies particularly
strongly to the design of apps or programs with multiple functions.
If you're working in an application area that relies on traditional screens and input methods, your
prototyping process might be pretty straightforward. You'll go from paper prototypes, to wireframes,
to exploring iteratively more complete versions of the final interface. For a lot of emerging domains
though, you'll have to get somewhat creative with your prototyping. For things like gestural or voice
interaction, you can likely use Wizard of Oz prototyping by having a human interpret the actions or
commands that will ultimately be interpreted by the computer. For augmented reality or wearable
devices though, you might have to get even more creative. So, take a second and brainstorm how you
might go about prototyping in your chosen field. Remember, your goal is to get feedback on your ideas
with the user early. What can you create that will get you to that feedback?
In this lesson, we've talked about several methods for prototyping. Our goal is to employ a lot of
methods to get feedback rapidly, and iterate quickly on our designs. Through that process, we can
work our way towards creating our ultimate interface. The main goal of this prototyping process has been to create designs we can evaluate with real users. We're obviously not going to deploy a hand-drawn interface to real customers; its value is in its ability to get us feedback. That's what the entire design life cycle has been leading towards: evaluation. Evaluating our ideas, evaluating our prototypes, evaluating our designs. That user evaluation is the key to user-centered design. Focusing on user evaluation ensures that our focus is always on the user's needs and experiences.
So now that we've researched the user's needs, brainstormed some design alternatives, and created
some sharable prototypes, let's move on to actual evaluation.
Introduction to Evaluation
The heart of user-centered design is getting frequent feedback from the users. That's where evaluation
comes into play. Evaluation is where we take what we've designed and put it in front of users to get
their feedback. But just as different prototypes serve different functions at different stages of the
design process, so also our methods for evaluation need to match as well.
Early on, we want more qualitative feedback. We want to know what they like, what they don't like,
whether it's readable, whether it's understandable. Later on, we want to know if it's usable. Does it
actually minimize your workload? Is it intuitive? Is it easy to learn?
Along the way, we might also want to iterate even more quickly by predicting what the results of user
evaluation will be. The type of evaluation we employ is tightly related to where we are in our design
process. So in this lesson, we'll discuss the different methods for performing evaluation to get the
feedback we need when we need it.
There are a lot of ways to evaluate interfaces. So to organize our discussion of evaluation, I've broken
these into three categories. The first is qualitative evaluation. This is where we want to get qualitative
feedback from users. What do they like, what do they dislike, what's easy, what's hard. We'll get that
information through some methods very similar, in fact identical, to our methods for need finding.
The second is empirical evaluation. This is where we actually want to do some controlled experiments
and evaluate the results quantitatively. For that, we need many more participants, and we also want
to make sure we addressed the big qualitative feedback first. The third is predictive evaluation. This is where we predict what the results of a user evaluation would be without actually involving users, letting us iterate even more quickly.
Before we begin, there is some vocabulary we need to cover to understand evaluation. These terms especially apply to the data that we gather during an evaluation. While they're particularly relevant for gathering quantitative data, they're useful in discussing our other kinds of data as well.
The first term is reliability. Reliability refers to whether or not some assessment of some phenomenon
is consistent over time. So for example, Amanda, what time is it?
Amanda is a very reliable assessment of the time. Every time I ask, she gives me the same time. We
want that in an assessment measure. We want it to be reliable across multiple trials. Otherwise its
conclusions are random, and just not very useful.
The second term is validity. Validity refers to whether an assessment actually measures what it's supposed to measure, whether it accurately reflects the real-world phenomenon. So while Amanda was a reliable timekeeper, she wasn't a very valid one. Her time wasn't correct, even though it was consistent.
Validity is closely connected to a principle called generalizability. Generalizability is the extent to which
we can apply lessons we learned in our evaluation to broader audiences of people. So, for example, we
might find that the kinds of people that volunteer for usability studies have different preferences than
the regular user. So the conclusions we draw from those volunteers might not generalize to the broader population of users.
A final term is precision, the exactness with which we measure something. In this case, no one's really going to say that Amanda was wrong in saying that it was 1:30. She just wasn't as precise. I could more accurately say it's 1:31 and 27 seconds, but that's probably more precision than we need.
As we describe the different kinds of data we can gather during evaluation, keep these things in mind.
If we were to conduct the same procedure again, how likely is it that we'd get the same results? That's
reliability. How accurately does our data actually capture the real world phenomenon that we care
about? That's validity. To what extent can we apply these conclusions to people that weren't part of our evaluation? That's generalizability.
In designing evaluations, it's critical that we define what we're evaluating. Without that, we generally
tend to bottom out in vague assessments about whether or not our users like our interface. So here are
five quick tips on what you might choose to evaluate. Number 1, efficiency. How long does it take users
to accomplish certain tasks? That's one of the classic metrics for evaluating interfaces. Can one
interface accomplish a task in fewer actions or in less time than another? You might test this with
predictive models or you might actually time users in completing these tasks. Still though, this paints a
pretty narrow picture of usability. Number 2, accuracy. How many errors do users commit while
accomplishing a task? That's typically a pretty empirical question, although we could address it
qualitatively as well. Ideally, we want an interface that reduces the number of errors a user commits
while performing a task. Both efficiency and accuracy, however, examine the narrow setting of an
expert user using an interface. So that brings us to our next metric. Number 3, learnability. Sit a user down in front of the interface and define some standard for expertise. How long does it take the user to hit that level of expertise? Expertise here might range from performing a particular action to something like creating an entire document. Number 4, memorability. Similar to learnability,
memorability refers to the user's ability to remember how to use an interface over time. Imagine you
have a user learn an interface, then leave and come back a week later. How much do they remember?
Ideally, you want interfaces that need only be learned once, which means high memorability. Number
5, satisfaction. When we forget to look at our other metrics, we bottom out in a general notion of satisfaction, but that doesn't mean it's unimportant. We do need to operationalize it, though. That means looking at things like users' enjoyment of the system, or the cognitive load they experience while using the system. To avoid social desirability bias, you might want to evaluate this in creative ways, like finding
out how many participants actually download an app they tested after the session is over. Regardless
of what you choose to evaluate, it's important that you very clearly articulate at the beginning what
you're evaluating, what data you're gathering, and what analysis you will use. These three things
should match up to address your research questions.
When we discussed prototyping, we talked about how over time our prototypes get higher and higher in fidelity. Something similar happens with evaluation. Over time, the evaluation methods we use will change.
Throughout most of our design process, our evaluations are formative, meaning their primary purpose is to help us redesign and improve our interface. At the end, though, we might want to do something
more summative to conclude the design process, especially if we want to demonstrate that the new
interface is better. Formative evaluation is evaluation with the intention of improving the interface
going forward. Summative is with the intention of conclusively saying at the end what the difference
was. In reality, hopefully we never do summative evaluation. Hopefully our evaluations are always with
the purpose of revising our interface and making it better over time. But in practice, there might come
times when you need to demonstrate a very clear quantitative difference.
And because of this difference, our early evaluations tend to be more qualitative. Qualitative
evaluations tend to be more interpretative and informal. Their goal is to help us improve or understand
the task. Our later evaluations are likely more empirical, controlled, and formal. Their goal is to
demonstrate or assess change. So while formative evaluation and summative evaluation were the
purposes of our evaluations, qualitative evaluations and empirical evaluations are ways to actually fulfill
those purposes. Predictive evaluation is a little outside this spectrum, so we'll talk about that as well. As far as this spectrum is concerned, predictive evaluations tend to be very similar to qualitative evaluations: they're interpretive and informal, just performed without actual users.
Recall also that earlier we talked about the difference between qualitative and quantitative data. As
you've probably realized, qualitative evaluation occurs early and empirical evaluation occurs late. And chances are, we're using qualitative data more early on, and quantitative data more later on. In reality, qualitative data is always useful for improving our interfaces, whereas quantitative data, while also useful, can really only arise when we have pretty rigorous evaluations.
And then one last area we can look at is where the evaluation takes place. In a controlled lab setting or
actually out in the field. Generally when we're testing our early low fidelity interfaces, we probably
want to do it in a lab setting as opposed to out in the wild. We want to bring participants into our lab
and actually describe what we're going for, the rationale behind certain decisions, and get their
feedback. Later on we might want to do real field testing where we give users a somewhat working
prototype, or something resembling a working prototype. And they can actually reflect on it as they go
about their regular lives, participating in whatever task that interface is supposed to help with. This helps us focus exclusively on the interface early on, and then transition to focusing on the interface in context later. But of course, we want to also think about the context early on. We could, for example, develop a navigation app that works great when we test it in our lab even though it demands a very high cognitive load, but that doesn't work at all out in the field, because when participants are actually driving, they can't spare that cognitive load to focus on our app. Now of course, none of these are hard and fast
rules. We'll very likely often do qualitative evaluation late or maybe do some field testing early. But as
general principles, this is probably the order in which we want to think about our different evaluation
styles.
Regardless of the type of evaluation you're planning to perform, there's a series of steps to perform to
ensure that the evaluation is actually useful. First, we want to clearly define the task that we're
examining. Depending on your place in the design process this can be very large or very small. If we
were designing Facebook, it can be as simple as posting a status update, or as complicated as navigating
amongst and using several different pages. It could involve context and constraints like taking notes
while running, or looking up a restaurant address without touching the screen. Whatever it is, we want
to start by clearly identifying what task we're going to investigate. Second, we want to define our
performance measures. How are we going to evaluate the user's performance? Qualitatively, it could
be based on their spoken or written feedback about the experience. Quantitatively, we can measure
efficiency in certain activities or count the number of mistakes. Defining performance measures helps
us avoid confirmation bias. It makes sure we don't just pick out whatever observations or data confirm
our hypotheses, or say that we have a good interface. It forces us to look at it objectively. Third, we
develop the experiment. How will we assess users' performance on the performance measures? If we're
looking qualitatively will we have them think out loud while they're using the tool? Or will we have
them do a survey after they're done? If we're looking quantitatively what will we measure, what will we
control, and what will we vary? This is also where we ask questions about whether our assessment
measures are reliable and valid. And whether the users we're testing are generalizable. Fourth, we
recruit the participants. As part of the ethics process, we make sure we're recruiting participants who
are aware of their rights and contributing willingly. Then fifth, we do the experiment. We have them walk through what we outlined when we developed the experiment. Sixth, we analyze the data. We focus
on what the data tells us about our performance measures. It's important that we stay close to what we
outlined initially. It can be tempting to just look for whatever supports our design, but we want to be
impartial. If we find some evidence that suggests our interface is good in ways we didn't anticipate, we
can always do a follow up experiment to test if we're right. Seventh, we summarize the data in a way
that informs our on-going design process. What did our data say was working? What could be
improved? How can we take the results of this experiment and use it to then revise our interface?
To put the prototype in front of users, we walked through this experimental method. We defined the
task, defined the performance measures, developed the experiment, recruited the participants, did the experiment, analyzed the data, and summarized the data. Based on that experience, we now have the
data necessary to develop a better understanding of the user's needs, to revisit our earlier design
alternatives and to either improve our prototypes by increasing their fidelity or by revising them based
on what we just learned. Regardless of whether we're doing qualitative, empirical, or predictive
evaluation, these steps remain largely the same. Those different types of evaluation just fill in the
experiment that we develop, and they inform our performance measure, data analysis, and summaries.
Qualitative evaluation involves getting qualitative feedback from the user. There are a lot of qualitative
questions we want to ask throughout the design process.
The methods we use for qualitative evaluation are very similar to the methods we used for need
finding. Interviews, think aloud protocols, focus groups, surveys, post event protocols. We use those
methods to get information about the task in the first place, and now we can use these techniques to
get feedback on how our prototype changes the task.
With qualitative research, we want to capture as much of the session as possible. Because things could
come up that we don't anticipate. And we'd like to look at them again later. So how do we do that?
One way is to actually record the session. The pros of recording a session are that it's automated, it's
comprehensive, and it's passive. Automated means that it runs automatically in the background.
Comprehensive means that it captures everything that happens during the session. And passive means
that it lets us focus on administering the session instead of capturing it. The cons though, are that it's
intrusive, it's difficult to analyze, and it's screenless. Intrusive means that many participants are uncomfortable being videotaped. It creates pressure knowing that every question and every mistake is going to be captured and analyzed by researchers later. Video is also very difficult to analyze. It requires a
person to come later and watch every minute of video, usually several times, in order to code and pull
out what was actually relevant in that session. And video recording often has difficulty capturing
interactions on-screen. We can film what a person is doing on a keyboard or with a mouse, but it is
difficult to then see how that translates to on-screen actions. Now some of these issues can be
resolved, of course. We can do video capture on the screen and synchronize it with a video recording. But if
we're dealing with children, or at risk populations, or with some delicate subject matter, the
intrusiveness can be overwhelming. And if we want to do a lot of complex sessions, the difficulty in
analyzing that data can also be overwhelming. For my dissertation work I captured about 200 hours of
video, and that's probably why it took me an extra year to graduate. It takes a lot of time to go through
all that video.
So in selecting a way to capture your qualitative evaluation, ask yourself, will the subjects find being
captured on camera intrusive? Do I need to capture what happens on screen? How difficult will this
data be to analyze? It's tempting, especially for novices, to focus on just capturing as much as possible during the session. But during the session is also when you can capture data in a way that's going to make your later analysis manageable.
Here are five quick tips for conducting successful evaluations. Number one, run pilot studies. Recruiting
participants is hard. You want to make sure that once you start working with real users, you're ready to
gather really useful data. So try your experiment with friends or family or coworkers before trying it out
with real users to iron out the kinks in your designs and your directions. Number two, focus on
feedback. It's tempting in qualitative evaluations to spend too much time trying to teach this one user.
If the user criticizes an element of the prototype, you don't need to explain to them the rationale. Your
goal is to get feedback to design the next interface, not to just teach this one current user. Number
three, use questions when users get stuck. That way, you get some information on why they're stuck
and what they're thinking. Those questions can also be used to guide users to how they should use it,
to make the session seem less instructional. Number four, tell users what to do, but not how to do it.
This doesn't always apply, but most often we want to design interfaces that users can use without any
real instruction whatsoever. So when performing qualitative evaluation, give them instruction on what
to accomplish, but let them try to figure out how to do it. If they try to do it differently than what you
expect, then you know how to design the next interface. Number five, capture satisfaction. Sometimes
we can get so distracted by whether or not users can use our interface that we forget to ask them
whether or not they like using our interface. So make sure to capture user satisfaction in your
qualitative evaluation.
In empirical evaluation, we’re trying to evaluate something formal, and most often that means
something numeric. It could be something explicitly numeric, like what layout of buttons leads to more
purchases or what gestures are most efficient to use. There could also be some interpretation involved,
though, like counting errors or coding survey responses.
The overall goal, though, is to come to something verifiable and conclusive. In industry this is often
useful in comparing designs or in demonstrating improvement. In research, though, this is even more
important because this is how we build new theories of how people think when they’re using interfaces.
If we wanted to prove that gestural interaction has a tougher learning curve than voice interaction or
that an audio interface is just as usable as a visual one, we would need to do empirical evaluation
between the interfaces.
Most empirical evaluations are going to be comparisons. We can do quantitative analysis without doing comparisons, but it usually isn't necessary. The biggest benefit of quantitative analysis is its ability to support objective comparisons between alternatives.
When we do qualitative evaluations, we effectively just bring in participants one at a time or in
groups, go through an interview protocol or script, and move along. Empirical evaluation is different,
though.
Here, we have multiple conditions, which we call treatments. These treatments could be different
interfaces, different designs, different colors, whatever we’re interested in investigating. Our goal here
is to investigate the comparison between the treatments, and end up with a conclusion about how
they’re different. However, we have to be careful to make sure that the differences we observe are
really due to the differences between the treatments, not due to other factors.
Imagine, for example, that we're comparing two versions of an interface that differ in color. To make a judgment about the color, we need to make sure the color is the only thing that differs between them. Of course, here this sounds silly. In practice, though, differences can be more subtle. If you were testing
different layouts, you might miss that one loads a bit faster, or one uses prettier images. And that could
actually account for any differences you might observe.
One option is a between-subjects experiment. We randomly split the participants into two groups, and one by one, we have them go through their treatment. At the end, we have the data from participants in one group to compare to data from participants in the other group.
There’s a second option, though. We can also do a within-subjects experiment. With a within-subjects
experiment, each participant participates in both treatments. However, a major lurking variable could
potentially be which treatment each participant sees first, so we still have to randomly assign
participants to treatment groups.
But instead of assigning participants to which treatment they’re receiving, we’re randomly assigning
them to what order they’ll receive the treatments in.
Imagine if all the smartest participants, or all the women were all placed in one group and received the
same treatment. That would clearly affect our results. So, we randomly assign people to groups. That
might also sound obvious, but imagine if your treatment involved a lot of physical set-up. It would be
tempting to run the first eight participants on one set-up, and the second eight on the other. But what if
that means the more punctual participants were in the first condition? Or what if you got better at
administering the experiment during the first condition, so that participants in the second condition had
a generally smoother experience? All of these are lurking variables that are controlled in part by
random assignment to groups.
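To make this concrete, here is a minimal Python sketch of random assignment; the participant IDs and group sizes are invented for illustration:

    import random

    participants = ["P%02d" % i for i in range(1, 17)]  # 16 hypothetical participants
    random.shuffle(participants)                        # randomize before splitting

    # Between-subjects: half experience treatment A, half treatment B.
    group_a, group_b = participants[:8], participants[8:]

    # Within-subjects: everyone experiences both treatments, but we randomly
    # assign the order to control for ordering effects.
    orders = {p: random.choice([("A", "B"), ("B", "A")]) for p in participants}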
Let's pretend this is a reaction time study, because that gives us nice numeric data. We're curious: what color should we use to alert a driver in a car that they're starting to leave their lane? We run half of them with orange and half of them with green, in a between-subjects design.
As a result, they’ve generated some data. These are each participant's reaction time. Our goal is to
compare this data and decide which is better. So how do we do that?
The process for doing this rigorously is called hypothesis testing. Whenever we're trying to prove
something, we initially hypothesize that the opposite is true. So if we're trying to prove that one of
these options is better than the other, we initially hypothesize that actually the two things are equal.
That’s the null hypothesis. It's the hypothesis that we accept if we can't find sufficient data to support
the alternative hypothesis. So we want to see if this difference is big enough to accept the alternative
hypothesis instead of the null hypothesis. We generally accept the alternative hypothesis if there is less
than a 5% chance that the difference could have arisen by random chance. In that case, we say that the
difference is “statistically significant”. So here, there’s probably a pretty good chance that this
difference could arise by random chance, not because orange is actually any better than green.
In hypothesis testing, we test the hypothesis that differences or trends in some data occurred simply by chance. If they are unlikely to have occurred by chance, we conclude that a difference really does
exist. How we do that test, though, differs based on the kind of data we have. Now we won't cover
how to actually do these tests in this video but we'll cover how to decide what tests you need to use
and provide links to additional information separately.
In its simplest form, we compare two sets of data like this using something called a two sample t-test. A
two sample t-test compares means of two unpaired sets of data. So we use it for between-subjects
testing with two sets of continuous data. Continuous data means each of these measurements are on a
continuous scale. And any number within that scale is hypothetically possible. We contrast this with
something like a discrete scale, where participants are asked to rate something on a scale of one to five
and have to choose a whole number. That would not be continuous data. It's important we only use t-
tests when we have continuous data. We have other methods for evaluating discrete data.
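As a small illustration, here's how you might run a two-sample t-test in Python with scipy; the reaction times below are invented for the example:

    from scipy import stats

    # Hypothetical reaction times in seconds for two between-subjects groups.
    orange = [0.41, 0.38, 0.45, 0.52, 0.39, 0.44, 0.47, 0.40]
    green = [0.43, 0.49, 0.55, 0.48, 0.51, 0.46, 0.53, 0.50]

    t_stat, p_value = stats.ttest_ind(orange, green)
    print("t = %.2f, p = %.3f" % (t_stat, p_value))
    # If p < 0.05, we reject the null hypothesis that the two means are equal.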
The math we're using changes a little bit if we're using proportions instead of just measurements. So
imagine if instead of just measuring a reaction time, we were instead measuring what percentage of the
time each participant reacted to the stimuli within a certain time threshold. So, for example, maybe
91% of the time this participant reacted in less than four-tenths of a second. It might seem like we
evaluate this the same way, but because of the difference in math behind proportions compared to
measurements, we have to use a slightly different test. We use a two-sample binomial test. Binomial means that on a given trial, there are only two possible results. So, in this case, the two possible results would be either reacting in less than four-tenths of a second or not. A two-sample binomial test compares proportions from two sets of data, and it's used for between-subjects testing with two sets of nominal data. Nominal carries a similar meaning to binomial in this context.
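The lecture doesn't prescribe a specific tool, but one common way to run this kind of two-proportion comparison in Python is a two-proportion z-test from statsmodels; the counts below are hypothetical:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts: trials where participants reacted within 0.4 seconds,
    # out of the total trials observed in each between-subjects group.
    successes = [182, 155]  # orange group, green group
    trials = [200, 200]

    z_stat, p_value = proportions_ztest(count=successes, nobs=trials)
    print("z = %.2f, p = %.3f" % (z_stat, p_value))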
Now everything we've talked about so far has been about comparing two sets of data. Either within
participants or between participants. What if you wanted to compare three, though? What if we tried
an orange color, a green color, and a black color for those alerts? You might be tempted to compare
orange and green, green and black, and orange and black, in three separate comparisons. The problem
is that each time we do an additional comparison, we're raising the chance for a false positive.
This is the origin of the phrase, if you torture the data long enough, eventually it will talk. You can get
any data to say anything but that doesn't mean what you're getting it to say is true. So for comparing
between more than two treatments, we use something called an ANOVA, an Analysis of Variance. It
compares the means from several sets of data and we use it for between-subjects testing with three or
more sets of continuous data. The purpose of this is to control for those false positives. In order for
ANOVA to conclude that there really is a difference, there needs to be a much more profound
difference between treatments. So if you're comparing more than two samples or more than two
treatments, you'll want to use an ANOVA instead of just the t-test.
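A minimal sketch of a one-way ANOVA in Python with scipy, again with invented reaction times:

    from scipy import stats

    # Hypothetical reaction times for three between-subjects treatments.
    orange = [0.41, 0.38, 0.45, 0.52, 0.39]
    green = [0.43, 0.49, 0.55, 0.48, 0.51]
    black = [0.45, 0.47, 0.42, 0.50, 0.46]

    f_stat, p_value = stats.f_oneway(orange, green, black)
    print("F = %.2f, p = %.3f" % (f_stat, p_value))
    # A significant result says only that some difference exists; a post-hoc
    # test (e.g., Tukey's HSD) would be needed to say which pairs differ.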
These are just the most common tests you'll see in HCI, in my experience, at least. But there are lots of
others. If you find yourself doing something that goes outside these basic tests, let us know and we'll
try to find the appropriate analysis for you. We'll also put some links to some resources on common
tests used in HCI in the notes below.
Here are five quick tips for doing empirical evaluations. You can actually take entire classes on doing
empirical evaluations, but these tips should get you started. Number 1, control what you can,
document what you can't. Try to make your treatments as identical as possible. However, if there are systematic differences between them, document and report that. Number 2, limit your variables. It can be tempting to try to vary lots of different things and monitor lots of other things, but that just leads to noisy, difficult data that will probably generate some false conclusions. Instead, focus on varying only one or two things, and monitor only a handful of things in response. There's nothing at all wrong with modifying only one variable and monitoring only one variable. Number 3, work backwards in
designing your experiment. A common mistake that I've seen is just to gather a bunch of data and
figure out how to analyze it later. That's messy, and it doesn't lead to very reliable conclusions. Decide
at the start what question you want to answer, then decide the analysis you need to use, and then
decide the data that you need to gather. Number 4, script your analyses in advance. Ronald Coase once
said, if you torture the data long enough, nature will always confess. What the quote means is if we
analyze and reanalyze data enough times, we can always find conclusions. But that doesn't mean that
they're actually there. So decide in advance what analysis you'll do, and do it. If it doesn't give you the
results that you want, don't just keep reanalyzing that same data until it does. Number 5, pay attention
to power. Power refers to the ability of a test to detect a difference of a given size, and generally it's very dependent on how many participants you have. If you want to detect a small effect, then you'll
need a lot of participants. If you only care about detecting a big effect, you can usually get by with
fewer.
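To see how power relates to sample size, here's a short sketch using statsmodels' power calculator; the effect sizes and thresholds are illustrative choices, not prescriptions:

    from statsmodels.stats.power import TTestIndPower

    # Participants per group needed to detect a "medium" effect (Cohen's d = 0.5)
    # at the conventional alpha = 0.05 with 80% power.
    n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print("about %d participants per group" % round(n))  # roughly 64

    # A smaller effect (d = 0.2) requires far more participants.
    n_small = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
    print("about %d participants per group" % round(n_small))  # roughly 394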
Predictive evaluation is evaluation we can do without actual users. Now, in user centered design that's
not ideal, but predictive evaluation can be more efficient and accessible than actual user evaluation. So
it's all right to use it as part of a rapid feedback process. It lets us keep the user in mind, even when we're not bringing users into the conversation. The important thing is to make sure we're using it
appropriately. Predictive evaluation shouldn't be used where we could be doing qualitative or empirical
evaluation. It should only be used where we wouldn't otherwise be doing any evaluation. Effectively,
it's better than nothing.
When we talked about design principles, we covered several heuristics and guidelines we use in designing interfaces. The first method for predictive evaluation is simply to hand our interface and these guidelines to a few experts to evaluate. This is called heuristic evaluation. Each individual evaluator inspects the interface alone and identifies places where the interface violates some heuristic.
We might sit with an expert while they perform the evaluation or they might generate a report.
Heuristics are useful because they give us small snapshots into the way people might think about our
interfaces. If we take these heuristics to an extreme, though, we could go so far as to develop models
of the way people think about our interfaces.
During our needfinding exercises, we developed models of our users' tasks. In model-based evaluation, we take those models and trace through them in the context of the interface that we designed.
For example, imagine if we identified this model as applying to users with low motivation to use this
interface. Maybe it's people doing purchases that they have to do for work, as opposed to just
shopping at their leisure. We can use that to inform an evaluation of whether or not the interface relies on high user motivation. If we find that the interface requires users to be more personally driven or to
keep more in working memory, then we might find that the users will fail if they don't have high
motivation to use the interface. And then we can revise it accordingly.
More recently, work has been done to create even more human-like models of users, like some work done by the Human Centered Design Group at the Institute for Information Technology in Germany on simulating users with cognitive agents. Developing such an agent is an enormous task on its own. But if we're working on a big long-term project like
Facebook, or in a high-stakes environment like air traffic control, having a simulation of a human that
we can run hundreds of thousands of times on different interface prototypes would be extremely
useful.
The most common type of predictive evaluation you'll likely encounter is the cognitive walkthrough. In a cognitive walkthrough, we step through the process of interacting with an interface,
mentally simulating at each stage what the user is seeing, thinking, and doing. To do this, we start by
constructing specific tasks that can be completed within our prototype. So I’m going to try this with the
card prototype that I used with Morgan earlier. I start with some goal in mind. So, right now my goal is
to leave a note. I look at the interface and try to imagine myself as a novice user. Will they know what
to do? Here, there’s a button that says “View and take a Note”, so I infer that’s what they would decide
to do. I tap that, and what is the response that I get? The system pauses playback, and it gives me this
note-taking screen. I go through the system like this, predicting what actions the user will take and
noting the response the system will give. At every stage of the process, I want to investigate this from
the perspective of the gulfs of execution and evaluation. Is it reasonable to expect the user to cross the
gulf of execution? Is the right action sufficiently obvious? Is the response to the action the one the user
would expect? On the other side, is it reasonable to expect the feedback to cross the gulf of evaluation?
Does the feedback show the user what happened? Does the feedback confirm the user chose the right
action? The weakness of cognitive walkthroughs is that we’re the designers, so it likely seems to us that
the design is fine. After all, we designed it. But if you can sufficiently put yourself in the user’s shoes,
you can start to uncover some really useful takeaways. Here, for example, from this cognitive
walkthrough, I’ve noticed that there isn’t sufficient feedback when the user has finished leaving a note.
The system just stops recording and resumes playback, which doesn’t confirm that the note is received.
And right now that might be a minor issue since there’s implicit feedback: the only way playback
resumes is if the note is received. But I’m also now realizing that it’s quite likely that users might start to
leave notes, then decide to cancel them. So, they both need a cancel option, and they need feedback to
indicate whether the note was completed and saved or canceled. I got that feedback just out of a
cognitive walkthrough of the interface as is. So, if you can put yourself in a novice’s shoes enough, you
can find some really good feedback without involving real users.
When we discussed prototypes for our design for an audiobook tool for exercisers, we briefly showed
the evaluation stage with Morgan actually using it. Let’s look at that in a little more depth, though.
What were we evaluating? At any stage of the process, we could have been performing qualitative
evaluation. We asked Morgan how easy or hard things were to do, how much she enjoyed using the
interface, and what her thought process was in interacting with certain prototypes. We could have also
performed some quantitative analysis. When she used the card-based prototype, for example, we could
have measured the amount of time it took her to decide what to do, or counted the number of errors
she committed. We could do the same kind of thing in the Wizard of Oz prototype. We could call out to
Morgan commands like “Press Play” and “Place a Bookmark” and see how long it takes her to execute
the command or how many errors she commits along the way. Between opportunities to work with
Morgan, we might also use some predictive evaluation to ensure we keep her in mind while designing.
Our goal is to apply multiple evaluation techniques to constantly center our designs around the user.
That’s why evaluation is a foundation of user-centered design -- just like we wanted to understand the
user and the task before beginning to design, we also want to understand how the user relates to the
design at every stage of the design life cycle.
In this lesson, we've covered three different types of evaluation. Qualitative, empirical, and predictive.
Each method has its advantages and disadvantages. Let's start to wrap this lesson up by exploring those
advantages with an exercise. Here are the methods that we've covered. And here are some potential
advantages. For each row, mark the column to which that advantage applies. Note that again, these
might be somewhat relative, so your answer will probably differ a bit from ours. You can go ahead and
skip to the exercise if you don't want to hear me read these. Our advantages are: does not require any actual users; identifies provable advantages; informs ongoing design decisions; investigates the participants' thought process; provides generalizable conclusions; and draws conclusions from actual participants.
Here would be my answers to this exercise. These are a little bit more objective than some of our
exercises in the past. First, if it does not require any actual users, predictive evaluation is the only
evaluation we can do without involving users in the evaluation process. That's both its biggest strength and its biggest weakness.
To succeed in HCI, you need a good evaluation plan. In industries like healthcare and education, that's
initially going to involve getting some time with experts outside the real context of the task. That's
bringing in doctors, bringing in nurses, bringing in patients, and exploring their thoughts on the
prototypes that you've designed. In some places like education, you might be able to evaluate with real
users even before the interface is ready. But in others, like healthcare, the stakes are high enough that
you'll only want real users using the interface when you're certain of its effectiveness and reliability. In
some emerging areas, you'll be facing multiple questions in evaluation. Take virtual reality, for example. Most people you encounter haven't used virtual reality before, so there's going to be a learning curve. How are you going to determine whether the learning curve is acceptable or not? If the user
runs into difficulties, how can you tell if those come from your interface, or if they're part of the
fundamental VR learning experience? So take a moment to brainstorm your evaluation approach for
your chosen application area. What kinds of evaluations would you choose, and why?
In this lesson we've discussed the basics of evaluation. Evaluation is a massive topic to cover though.
You could take entire classes on evaluation. Heck, you could take entire classes only on specific types of
evaluation.
Our goal here has been to give you enough information to know what to look into further, and when.
We want you to understand when to use qualitative evaluation, when to use empirical evaluation, and
when to use predictive evaluation. We want you to understand, within those categories, what the
different options are. That way, when you're ready to begin evaluation, you know what you should look
into doing.
The content we've covered so far was developed over the course of several decades of research in HCI
and human factors, and it's all still applicable today as well. At the same time, new technologies and
new eras call for new principles and new workflows. And specifically, the advent of the Internet
ushered in new methods for HCI.
Many software developers now adopt an Agile workflow which emphasizes earlier delivery, more
continuous improvement, and rapid feedback cycles.
Where did these changes come from? We can think of them in terms of some of the costs associated
with elements of the design life cycle. Think back to before the age of the internet. Developing
software was very expensive, it required a very specialized skill set. Software distribution was done the
same way we sold coffee mugs or bananas. You'd go to the store, and you'd physically buy the
software. That distribution method was expensive as well. And if you ship software that was hard to
use, the cost of fixing it was enormous. You had to mail each individual person an update disk. And
then the only way to get user feedback, or even to find out if it was usable, was the same way you
would do it before distribution, by having users come in for testing. All this meant there was an
enormous need to get it right the first time. If you didn't, it would be difficult to fix the actual software,
difficult to get the fix to users, and difficult to find out that a fix was even needed. Shigeru Miyamoto,
the creator of Nintendo's best video game franchises, described this in terms of video games by saying,
a delayed game is eventually good, but a rushed game is forever bad. The same applied to software.
Fast forward to now though, is that still true? Development isn't cheap now, but it is cheaper than it
used to be. A single person can develop in a day, what would've taken a team of people months to do
20 years ago, thanks to advances in hardware, programming languages, and the available libraries. You can look at all the imitators of popular games on either the Android or the iPhone App Store to quickly see evidence of that. Distribution has gotten cheaper, too: software and updates can now be delivered over the internet instead of on physical media. Tesla, for example, regularly pushes software updates to its cars via the internet.
And in the video game industry, day one patches that fix glitches on the very first day of release have
pretty much become the standard.
Now make no mistake, this isn't justification to throw out the entire design life cycle. The majority of design and research still goes through a longer process. You need several iterations through the full design life cycle for big websites, complex apps, anything involving designing hardware, anything involving a high-profile first impression, and really anything even somewhat high in stakes. But that said, there exists a new niche for rapid development. Maybe you came up with
an idea for a simple Android game. In the time it would take you to go through this longer process, you
could probably implement the game, and get it in front of real users, and get a lot more feedback.
That's what we're discussing here. How do you take the principles we've covered so far and apply them
to a rapid, agile development process?
Before I describe the current ideas behind when to go for an Agile development process, let’s see what
you think. Here are six possible applications we might develop. Which of these would lend itself to an
agile development process?
Here would be my answers. The two areas that I think are good candidates for an agile development
process are the two that use existing devices and don't have high stakes associated with them. In both
these cases, rolling out updates wouldn't be terribly difficult, and we haven't lost a whole lot by initially
having a product that has some bugs in it. A camera interface for aiding MOOC recording would be a
good candidate, if the camera environment was easier to program for, but programming for a camera
isn't like programming for an app store, or for a desktop environment. I actually don't even know how
you go about it. So for us, a camera interface for aiding MOOC recording probably wouldn't be a great
candidate, because we don't have access to that platform. And remember, our goal is to get products
in front of real users as soon as possible. Now of course, that all changes if we're actually working for a
camera company and we do have access to that platform. The second one is more fundamental though.
So when should you consider using these more agile methodologies? Lots of software development
theorists have explored this space. Boehm and Turner specifically suggest that agile development can
only be used in certain circumstances. First, they say, it must be an environment with low criticality. By its nature, agile development means letting the users do some of the testing. So you don't want to use
it in environments where bugs or poor usability are going to lead to major repercussions. Healthcare or
financial investing wouldn't be great places for agile development, generally speaking. Although there
have been efforts to create standards that would allow the methodology to apply, without
compromising security and safety. But for things like smartphone games and social media apps, the
criticality is sufficiently low. Second, it should really be a place where requirements change often. One
of the benefits of an agile process is they allow teams to adjust quickly to changing expectations or
needs. A thermostat, for example, doesn't change its requirements very often. A site like Udacity
though, is constantly adjusting to new student interests or student needs. Now these two components
apply to the types of problems we're working on. If we're working on an interface that would lend itself
to a more agile process, we also must set up the team to work well within an agile process. That means
small teams that are comfortable with change. As opposed to large teams that thrive on order. So
generally, agile processes can be good in some cases with the right people, but poor in many others.
In 2006, Stephanie Chamberlain, Helen Sharp, and Neil Maiden investigated the conflicts and
opportunities of applying agile development to user-centered design. They found interestingly that the
two actually had a significant overlap.
Both agile development and user-centered design emphasized iterative development processes
building on feedback from previous rounds. That's the entire design life cycle that we've talked about.
That's at the core of both agile development and user-centered design.
And both also emphasize the importance of team coherence. So it seems that agile methods and user-
centered design agree on the most fundamental element, the importance of the user.
As a result, the authors advocate five principles for integrating user-centered design and agile development. Two of these were shared between the methodologies in the first place: high user involvement and close team collaboration. User-centered design's emphasis on prototyping and the design life cycle shows up in the proposal that designers work a sprint ahead of the developers, performing the research necessary for user-centered design. To facilitate this, strong project management is necessary.
One application of Agile development in HCI is the relatively new idea of live prototyping. Live
prototyping is a bit of an oxymoron, and the fact that it's an oxymoron speaks to how far along
prototyping tools have come. We've gotten to the point in some areas of development where
constructing actual working interfaces is just as easy as constructing prototypes. So here's one example
of this, it's a tool we use at Udacity called Optimizely. It allows for drag and drop creation of real
working webpages. The interface is very similar to many of the wire-frame tools out there, and yet this
website is actually live. I can just click a button and this site goes public. Why bother constructing
prototypes before constructing my final interface, when constructing the final interface is as easy as
constructing prototypes? Of course, this only addresses one of the reasons we construct prototypes.
We don't just construct them because they're usually easier, we also construct them to get feedback
before we roll out a bad design to everyone. But when we get to the point of making small little tweaks
or small revisions, or if we have a lot of experience with designing interfaces in the first place, this
might not be a bad place to start. It's especially true if the cost of failure is relatively low, and if the
possible benefit of success is particularly high. I would argue that's definitely the case for any kind of e-
commerce site. The cost of failure is maybe losing a few sales but the possible benefit is gaining more
sales for a much longer time period. I'm sure anyone would risk having fewer sales on one day for the
possible reward of having more sales every subsequent day.
So in some contexts, it's now no harder to construct an actual interface than it is to construct a
prototype, so we might skip the prototyping phase altogether. However, prototypes also allowed us to
gather feedback from users. Even though we can now easily construct an interface, we don't want to
immediately roll out a completely untested interface to everyone who visits our site. We might be able
to fix it quickly, but we're still eroding user trust in us and wasting our user's time. That's where the
second facet of this comes in, AB testing. AB testing is the name given to rapid software testing
between, typically, two alternatives, A and B. Statistically, it's not any different from the t-tests we discussed earlier. What
makes AB testing unique is that we're usually rapidly testing small changes with real users. We usually
do it by rolling out the B version, the new version to only a small number of users, and ensuring that
nothing goes terribly wrong, or there's not a dramatic dip in performance. That way we can make sure
a change is positive, or at least neutral, before rolling it out to everyone. But look at where the testing and feedback are coming in here: they're coming automatically from real users during normal usage of our tool. There's no added cost to recruiting participants, and the feedback is received instantly. So for a
quick example, this is the overview page for one of Udacity's programs, and it provides a timeline for the program in terms of the total number of hours students should dedicate to it. Is number of hours the best way to display this? I don't know, but we could find out. Instead of showing 420 hours, maybe I show this as 20 hours per week. In this interface, all I have to do is edit it, and I immediately have a new version of this interface that I can try out.
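This isn't how Optimizely works internally, but as a sketch of the idea, an A/B rollout just needs a way to consistently bucket a small fraction of users into the new B version; everything below (the function name, the 10% rollout) is hypothetical:

    import hashlib

    def assign_variant(user_id, rollout_fraction=0.10):
        """Deterministically bucket a user so the same user always sees the
        same variant; only rollout_fraction of users see the new B version."""
        bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
        return "B" if bucket < rollout_fraction * 100 else "A"

    print(assign_variant("user-42"))  # always the same answer for this user

From there, comparing the two groups' outcomes is the same kind of two-proportion comparison described earlier.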
Agile development techniques don't replace the design lifecycle, they just caffeinate it. We're still doing
needfinding, but we're probably just doing it a little bit more tacitly by reading user feedback or
checking out interaction logs. We're still brainstorming design alternatives. But we're really just coming
up with them in our head, because we then immediately move them on to prototyping. And our
prototypes are still just prototypes, they just happen to work. And we're still doing evaluation by rolling
our changes out to only certain participants first to make sure the response is good. And the results of
that evaluation then feed the same process over and over again. So taking an agile approach to the
design lifecycle really doesn't change the cycle itself. It just changes the rate at which we go through it,
and the types of prototypes, and the types of evaluation that we actually do. And remember also that
Chamberlain, Sharp, and Maiden advocated still doing the initial needfinding step. Rarely will we go
from no interface at all to a working prototype quite as quickly as we go through revisions of those
working prototypes. And so it's useful to do an initial needfinding phase the way we normally would do
it, and then bursting into a more agile revision process once we have our working prototype to actually
tweak and modify.
Here are five quick tips for using HCI and agile development together, especially for mitigating the risks
to the experience presented by this more agile development process. Number one, start more traditional. Start with a more traditional needfinding and prototyping process, and shift to more agile development once you have something up and running. Jakob Nielsen describes this as doing some foundational user research. Once you have something up and running, you have a way of probing
the user experience further. But you need something solid to begin with, and that comes from the
more traditional process. Number two, focus on small changes. Notice that when I was doing live
prototyping and A-B testing, I was making a small change to an existing interface. Not building an entire
new site from scratch. Number three, adopt a parallel track method. Agile development often uses
short two week sprints in development. Under that setup have the HCI research one sprint ahead of the
implementation. The HCI team can do two week sprints of need finding, prototyping, and low fidelity
evaluation and then hand the results to the development team for their next sprint. Number four, be
careful with consistency. One of our design principles was consistency, both within our interfaces and across interface design as a whole. If your interface caters to frequent visitors or users, you'll want to
be conservative in how often you mess with their expectations. If you're designing for something like a
museum kiosk, though, you can be more liberal in your frequent changes. Number five, nest your
design cycles. In agile development you go through many small design cycles rapidly and each cycle
gives you a tiny bit of new information. Take all that new information you gather and use it in the
context of a broader, more traditional design cycle, aimed at long-term substantive improvements
instead of small optimizations.
Does the area of HCI on which you chose to focus lend itself naturally to agile development? There are
a lot of questions to ask in that area. Are you working in a high stakes area like healthcare or
autonomous vehicles? What's the cost of failure? If it's high, you might want to avoid agile
development. After all, it's built in large part around learning from the real failures of real users. If
that's a user unfairly failing to reach the next level of a game, that's probably fine. If that's a doctor entering
the wrong dosage of a medication into a new interface, that's not fine. You also need to think of
development costs. Agile development relies on being able to get a product up and out the door
quickly, and change it frequently. If any part of your design is reliant on the hardware, then agile
development presents challenges. It might be easy to roll out a software update to improve a car screen
interface, but you can't download a car to fix a hardware problem. Now take a moment, and think
about whether agile development would be right for the area of application that you chose.
In this lesson we've covered a small glimpse of how HCI can work in a more agile development
environment. In many ways they're a nice match.
Both emphasize feedback cycles, both emphasize getting user feedback, and both emphasize rapid
changes. But while HCI traditionally has done these behind the scenes before reaching real users, Agile
emphasizes doing these live. Now it's important to note, I've only provided a narrow glimpse into what
Agile development is all about. I've discussed how HCI matches with the theory and the goals of Agile
development, but Agile is a more complex suite of workflows and stakeholders. I really recommend
reading more about it before you try to take an Agile approach to HCI, or before you try to integrate
interaction design into an existing Agile team. As you do though, I think you'll notice that there can be a
really nice compatibility between the two.
Introduction
In this unit we’ve discussed the HCI research methods that form the design life cycle, an iterative
process between needfinding, brainstorming design alternatives, prototyping, and evaluation with real
users.
We also want to tie it into the design principles unit and explore how we can use the design principles
and research methods in conjunction with one another.
Throughout this unit we’ve used the running example of designing an audiobook app that would let
people who are exercising interact with books in all the ways you or I might while sitting and reading.
That means being able to leave bookmarks, take notes, and so on.
We discussed doing our foundational needfinding, going to a park and observing people exercising. We
talked about doing some interviewing and surveys to find out more targeted information about what
people wanted and needed.
Then, we took those alternatives and prototyped a few of them. Specifically, we constructed Wizard of
Oz prototypes for voice and gesture interfaces and paper prototypes for on-screen interfaces.
What’s next? Next, we go through another phase of the design life cycle.
We take the results of our initial iteration through the design cycle and use the results to return to the
needfinding process. That’s not to say we need to redo everything from scratch, but our prototypes and
evaluation have now increased our understanding of the problem. There are things we learn by
prototyping and evaluating about the task itself. In this case, we could have learned that even for
exercisers with their hands free, gestures are still tough because they’re moving around so much. The
evaluation process may have also given us new questions we want to ask users to understand the task
better. For example, Morgan mentioned needing to be able to rewind. We might want to know how
common a problem that is.
In many ways, synthesizing our experiences with the evaluation is our next needfinding process.
Then, more prototyping. At this point, we might discover that as we try to increase the fidelity of our
prototypes, the technology or resources aren’t quite there yet. For example, while the gesture interface
might have been promising in the Wizard of Oz prototype, we don’t yet have the technology to
recognize gestures that way on the go. Or we might have found that the expense related to the prototype is unfeasible, or that realizing the prototype would require violating some of our other user needs. So, for example, we could do gesture recognition if we had users hold a physical device that
could recognize gestures, but that might be too expensive to produce, or it might conflict with our
audience’s need for a hands-free system. So we move on with the prototypes that we can build, with
the goal of getting to the feedback stage as quickly as possible. For voice recognition, instead of trying
to build a full voice recognition system, maybe we just build a system that can recognize a few very simple commands.
Then, we evaluate again. This time, we probably get a little more objective. We still want data on the
qualitative user experience, but we also want data on things like: how long does it take a user to
perform the desired actions in the interface? What prevents them from working with the interface?
Imagine that we found, for instance, that for many exercisers, they go through places that are too loud
for voice commands to work. Or, we find that the time it takes to pull out the interface and interact is
too distracting. That information is once again useful to our ongoing iteration. At the end of that
process, we again have some higher-fidelity prototypes, but no product yet. So, we go again.
At the end of the last iteration through the design cycle, we had two interface prototypes, each with
significant weaknesses. Our voice command interface struggled in the loud areas where exercisers are often exercising, and our screen-based interface presented too high a gulf of execution. But notice how far
we’ve come at this point. We now have a pretty complete and nuanced view of the task and our
possible solutions. Now, let’s go through one more iteration to get to something we can actually
implement and deploy.
Our needfinding has come along to the point of understanding that completely hands-free interfaces are
more usable, but we also know that gesture-based is technologically unfeasible and voice-based isn’t
perfectly reliable.
So, we create a new prototype, basically merging our two from the previous iteration. They’re still
reasonably low fidelity because we haven’t tested this combination yet, and the next stage of
sophistication is going to be expensive. So, we want to make sure it’s worth pursuing.
So that’s the end, right? We went through a few iterations of the design life cycle getting iteratively
more high-fidelity and rigorous with our evaluation. Finally, we have a design we like. We implement it
fully, submit it to the app store, and sit back while the money rolls in. Not exactly. Now instead of
having a handful of users we bring in to use our interface, we have hundreds of users using it in ways we
never expected. And now the cycle begins again. We have data we've automatically collected, either
through usage tracking or error logs. We have user reviews or feedback they submit. So, we jump back
into needfinding using the data we have available to us. We might find subtle needs, like the need for
more control over rewinding and fast forwarding. We might move on and prototype that with
commands like ‘back 5’ and ‘back 15’. We might uncover entirely new needs as well: we find there’s
a significant contingent of people using the interface while driving. It’s similar in that it’s another place
where people’s hands and eyes are occupied, but it has its own unique needs as well, like the ability to
run alongside a navigation app. So the process starts again, this time with live users’ data. And in
general, it never really ends. Nowadays, you very rarely see interfaces, apps, programs, or web sites
that are intentionally put up once and never changed. That might happen because the designers got
busy or the company went out of business, but it’s rarely one-off by design. And as the design evolves
over time with real data, you’ll start to see nested feedback cycles: week to week small additions give
way to month-to-month updates and year-to-year reinventions. In many ways, your interface becomes
like a child: you watch it grow up and take on a life of its own.
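As a concrete sketch of the ‘back 5’ and ‘back 15’ commands mentioned above, here is one way such utterances might be parsed once transcribed to text; the pattern and names are hypothetical illustrations:

    # Parse rewind commands of the form 'back <seconds>'.
    import re

    BACK_PATTERN = re.compile(r"^back (\d{1,3})$")

    def parse_rewind(utterance):
        """Return the number of seconds to rewind, or None if no match."""
        match = BACK_PATTERN.match(utterance.strip().lower())
        return int(match.group(1)) if match else None

    print(parse_rewind("Back 15"))    # -> 15
    print(parse_rewind("back five"))  # -> None (spelled-out numbers not handled)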
The design principles we describe in our other unit are deeply integrated throughout this design life
cycle. They don’t supplant it -- you won’t be making any great designs just by applying principles -- but
they can streamline things. In many ways, design principles capture takeaways and conclusions found
by this design life cycle in the past in ways that can be transferred to new tasks or new interfaces. Many
of our needs are driven by our current understanding of human abilities. Task analysis allows us to
describe those needs, those tasks, in formal ways to equip the interface design process. And cognitive
load lets us keep in mind how much users are asked to do at a time. Direct manipulation gives us a
family of techniques that we want to emphasize in coming up with our design alternatives. Mental
models provide us an understanding of how the design alternatives might mesh with the user’s
understanding of the task. And distributed cognition gives us a view on interface design that lends itself
to design at a larger level of granularity. Here we're designing systems, not just interfaces. Design
principles in general give us some great rules of thumb to use when creating our initial prototypes and
designs. Our understanding of representations ensures that the prototypes we create match the users’ mental models that we uncovered before. Invisible interfaces help us remember that the interface
should be the conduit between the user and the task, not the focus of attention itself. Then the
vocabulary of the feedback cycle, the gulfs of execution and evaluation, give us ways to evaluate and
describe our evaluations of the interfaces that we design. The notion of politics and values in interfaces allows us to evaluate the interface not just in terms of its usable interactions, but in the types of society it
creates or preserves. And those usability heuristics that we applied to our prototyping are also a way of
evaluating our interface and mentally simulating what a user will be thinking while using our creation.
These principles of HCI were all found through many years of going through the design life cycle,
creating different interfaces, and exploring and evaluating their impact. By leveraging those lessons, we can get to usable interfaces much faster. But applying those lessons doesn't remove the need to go talk to real users.
Over the past several lessons, you’ve been exploring how the design life cycle applies to the area of HCI
that you chose to explore. Now that we’ve reached the end of the unit, take a moment and reflect on
the life cycle you developed. How feasible would it be to actually execute? What would you need?
What kind of users do you need? How many? When do you need them? There are right answers here,
of course: ideally, you’ll need users early and often. That’s what user-centered design is all about. In
educational technology, that might mean having some teachers, students, and parents you can contact
frequently. In computer-supported cooperative work, that might mean having a community you can
visit often to see the new developments. In ubiquitous computing, that might mean going as far as
having someone who specializes in low-fidelity 3D prototypes to quickly spin up new ideas for testing.
Now that you understand the various phases of the design life cycle, take a moment and reflect on how
you’ll use it iteratively as a whole in your chosen area of HCI.
At a minimum, user-centered design advocates involving users throughout the process through surveys,
interviews, evaluations, and more that we’ll talk about. However, user-centered design can be taken to
even greater extremes through a number of approaches beyond what we’ve covered.
One is called participatory design. In participatory design, all the stakeholders -- including the users
themselves -- are involved as part of the design team. They aren’t just a source of data, they’re actually
members of the design team working on the problem. That allows the user perspective to be pretty
omnipresent throughout the design process. Of course, there’s still a danger there: generally, we are
not our user, but in participatory design one of the designers is the user… but they’re not the only user.
So, it’s a great way to get a user’s perspective, but we must also be careful not to over-represent that
one user’s view.
A second approach is action research. Action research is a methodology that addresses an immediate
problem, and researches it by trying to simultaneously solve it. Data gathered on the success of the attempted solution then feeds back into our understanding of the problem, informing the next attempt. Action research is typically performed by practitioners embedded in the context themselves, like a teacher studying her own classroom.
A third approach is design-based research. Design-based research is similar to action research, but it
can be done by outside practitioners, as well. It’s especially common in learning sciences research. In
design-based research, designers create interventions based on current understanding of the theory and
the problem, and use the success of those interventions to improve our understanding of the theory or
the problem. For example, if we believed a certain intersection had a lot of jaywalkers because the signs
had poor visibility, we might interview people at the intersection for their thoughts. Or, we could create
a solution that assumes we’re correct, and then use it to evaluate whether or not we were correct. If
we create a more clearly visible sign and it fixes the problem, then it suggests our initial theory was
correct. In all these approaches, notice iteration still plays a strong role: we never try out just one design
and stop. We run through the process, create a design, try it out, and then iterate and improve on it.
Interface design is never done: it just gets better and better as time goes on, while also adjusting to new
trends and technologies.
This wraps up our conversation on research methods and the design life cycle. The purpose of these methods is to put a strong focus on user-centered design throughout the process.
We want to start our designs by understanding user needs, then get user feedback throughout the
design process. As we do, our understanding of the user and the task improves, and our designs
improve along with it. Even after we’ve released our designs, modern technology allows us to continue
that feedback cycle, continually improving our interfaces and further enhancing the user experience.
Introduction to Applications
If you’re watching this course in the order it was originally produced, you’ve now learned the
foundational principles of HCI and the research methods behind developing interfaces. At the beginning
of the course, we also previewed some of the open areas of HCI development and research. Now, what
we’d like to do is give you a jump start in looking into the areas that are interesting to you. In the
lessons that follow, we’ll replay the preview videos for each topic from the beginning of the course, and
then provide a small library of information on each topic. We certainly don’t expect you to go through
every portion of all of these lessons. Find what’s interesting to you, and use these materials as your
jumping-off point.
The year that I'm recording this is what many have described as the year that virtual reality finally hits
the mainstream. By the time you watch this, you'll probably be able to assess whether or not that was
true, so come back in time and let me know. Virtual reality is an entire new classification of interaction
and visualization and we're definitely still at the beginning of figuring out what we can do with these
new tools. You could be one of the ones who figures out the best way to solve motion sickness or how
to get proper feedback on gestural interactions.
A lot of the press around virtual reality has been around video games, but that's definitely not the only
application. Tourism, commerce, art, education, virtual reality has applications to dozens of spaces.
Lécuyer, A., Lotte, F., Reilly, R. B., Leeb, R., Hirose, M., & Slater, M. (2008). Brain-computer
interfaces, virtual reality, and videogames. IEEE Computer, 41(10), 66-72.
Sutcliffe, A., & Gault, B. (2004). Heuristic evaluation of virtual reality applications. Interacting
with Computers, 16(4), 831-849.
Riva, G., Baños, R. M., Botella, C., Mantovani, F., & Gaggioli, A. (2016). Transforming Experience:
The Potential of Augmented Reality and Virtual Reality for Enhancing Personal and Clinical
Change. Frontiers in Psychiatry, 7.
Gugenheimer, J., Wolf, D., Eiriksson, E. R., Maes, P., & Rukzio, E. (2016, October). Gyrovr:
Simulating inertia in virtual reality using head worn flywheels. In Proceedings of the 29th Annual
Symposium on User Interface Software and Technology (pp. 227-232). ACM.
Bian, D., Wade, J., Warren, Z., & Sarkar, N. (2016, July). Online Engagement Detection and Task
Adaptation in a Virtual Reality Based Driving Simulator for Autism Intervention. In International
Conference on Universal Access in Human-Computer Interaction (pp. 538-547). Springer
International Publishing.
North, M. M., & North, S. M. (2016). Virtual reality therapy. In Computer-Assisted and Web-
Based Innovations in Psychology, Special Education, and Health, 141.
Recommended Books:
Biocca, F., & Levy, M. R. (Eds.). (2013). Communication in the age of virtual reality. Routledge.
Earnshaw, R. A. (Ed.). (2014). Virtual reality systems. Academic Press.
Virtual reality generally works by replacing the real world's visual, auditory, and sometimes even olfactory or kinesthetic stimuli with its own input. Augmented reality, on the other hand, complements
what you see and hear in the real world. So for example, imagine a headset like a Google Glass that
automatically overlays directions right on your visual field. If you were driving, it would highlight the
route to take, instead of just popping up some visual reminder. The input it provides complements stimuli coming from the real world instead of just replacing them. And that creates some
enormous challenges, but also some really incredible opportunities as well.
Imagine the devices that can integrate directly into our everyday lives, enhancing our reality. Imagine
systems that could, for example, automatically translate text or speech in a foreign language, or could
show your reviews for restaurants as you walk down the street. Imagine a system that students could
use while touring national parks or museums, that would automatically point out interesting
information, custom tailored to that student's own interests. The applications of augmented reality
could be truly stunning, but it relies on cameras to take input from the world, and that actually raises some significant privacy concerns.
Olsson, T., Lagerstam, E., Kärkkäinen, T., & Väänänen-Vainio-Mattila, K. (2013). Expected user
experience of mobile augmented reality services: a user study in the context of shopping
centres. Personal and Ubiquitous Computing, 17(2), 287-304.
Chang, K. E., Chang, C. T., Hou, H. T., Sung, Y. T., Chao, H. L., & Lee, C. M. (2014). Development
and behavioral pattern analysis of a mobile guide system with augmented reality for painting
appreciation instruction in an art museum. Computers & Education, 71, 185-197.
Hürst, W., & Van Wezel, C. (2013). Gesture-based interaction via finger tracking for mobile
augmented reality. Multimedia Tools and Applications, 62(1), 233-258.
Lv, Z., Halawani, A., Feng, S., Ur Réhman, S., & Li, H. (2015). Touch-less interactive augmented
reality game on vision-based wearable device. Personal and Ubiquitous Computing, 19(3-4), 551-
567.
Lee, K. (2012). Augmented reality in education and training. TechTrends, 56(2), 13-21.
Suárez-Warden, F., Barrera, S., & Neira, L. (2015). Communicative Learning for Activity with
Students Aided by Augmented Reality within a Real Time Group HCI. Procedia Computer Science,
75, 226-232.
Anderson, F., Grossman, T., Matejka, J., & Fitzmaurice, G. (2013, October). YouMove: enhancing
movement training with an augmented reality mirror. In Proceedings of the 26th Annual ACM
Symposium on User Interface Software and Technology (pp. 311-320). ACM.
Zhu, E., Hadadgar, A., Masiello, I., & Zary, N. (2014). Augmented reality in healthcare education:
an integrative review. PeerJ, 2, e469.
Recommended Books:
Schmalstieg, D. & Hollerer, T. (2016). Augmented Reality: Principles and Practice. Addison-
Wesley Professional.
Craig, A. B. (2013). Understanding Augmented Reality: Concepts and Applications. Newnes.
Dedicated Courses:
Internet of Things & Augmented Reality Emerging Technologies, Yonsei University via Coursera
Getting started with Augmented Reality, Institut Mines-Télécom
Ubiquitous Computing refers to the trend toward embedding computing power in more and more
everyday objects. You might also hear it referred to as pervasive computing, and it's deeply related to
the emerging idea of an Internet of Things. A few years ago, you wouldn't have found computers in
refrigerators and wristwatches, but as microprocessors became cheaper and as the world became
increasingly interconnected, computers are becoming more and more ubiquitous.
Modern HCI means thinking about whether someone might use a computer while they're driving a car
or going on a run. It means figuring out how to build smart devices that offload some of the cognitive load of everyday tasks from their users.
This push for increasing pervasiveness has also lead to [SOUND] the rise of wearable technologies.
Exercise monitors are probably the most common examples of this, but smart watches, Google Glass,
augmented reality headsets, and even things like advanced hearing aids and robotic prosthetic limbs,
are all examples of wearable technology. This push carries us into areas usually reserved for human
factors engineering and industrial design, which exemplifies the increasing role of HCI in the design of
new products.
Starner, T. (2013). Project glass: An extension of the self. IEEE Pervasive Computing, 12(2), 14-
16.
Lv, Z., Halawani, A., Feng, S., Ur Réhman, S., & Li, H. (2015). Touch-less interactive augmented
reality game on vision-based wearable device. Personal and Ubiquitous Computing, 19(3-4), 551-
567.
Clear, A. K., Comber, R., Friday, A., Ganglbauer, E., Hazas, M., & Rogers, Y. (2013, September).
Green food technology: UbiComp opportunities for reducing the environmental impacts of food.
In Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing, 553-558.
ACM.
De Haan, G. (2013, April). A Vision of the Future of Media Technology Design Education-design
and education from HCI to UbiComp. In Proceedings of the 3rd Computer Science Education
Research Conference on Computer Science Education Research, 67-72. Open Universiteit,
Heerlen.
A lot of the current focus on robotics is on their physical construction and abilities or on the artificial
intelligence that underlies their physical forms. But as robotics becomes more and more mainstream,
we're going to see the emergence of a new subfield of human-computer interaction, human-robot
interaction. The field actually already exists. The first conference on human robot interaction took
place in 2006 in Salt Lake City, and several similar conferences have been created since then. Now as
robots enter the mainstream, we're going to have to answer some interesting questions about how we
interact with them.
For example, how do we ensure that robots don't harm humans through faulty reasoning? How do we
integrate robots into our social lives, or do we even need to? As robots are capable of more and more,
how do we deal with the loss of demand for human work? Now these questions all lie at the
intersection of HCI, artificial intelligence and philosophy in general. But there are some more concrete
questions we can answer as well. How do we pragmatically equip robots with the ability to interact naturally with the people around them?
Glas, D., Satake, S., Kanda, T., & Hagita, N. (2012, June). An interaction design framework for
social robots. In Robotics: Science and Systems 7, 89 - 96.
Toris, R., Kent, D., & Chernova, S. (2014). The robot management system: A framework for
conducting human-robot interaction studies through crowdsourcing. Journal of Human-Robot
Interaction, 3(2), 25-49.
Beer, J., Fisk, A. D., & Rogers, W. A. (2014). Toward a framework for levels of robot autonomy in
human-robot interaction. Journal of Human-Robot Interaction, 3(2), 74.
Schirner, G., Erdogmus, D., Chowdhury, K., & Padir, T. (2013). The future of human-in-the-loop
cyber-physical systems. Computer, 1, 36-45.
Ruhland, K., Peters, C. E., Andrist, S., Badler, J. B., Badler, N. I., Gleicher, M., ... & McDonnell, R.
(2015, September). A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour
Generation, User Interaction and Perception. In Computer Graphics Forum 34(6), 299-326.
Naveed, S., Rao, N. I., & Mertsching, B. (2014). Multi Robot User Interface Design Based On HCI
Principles. International Journal of Human Computer Interaction (IJHCI), 5(5), 64.
Złotowski, J., Proudfoot, D., Yogeeswaran, K., & Bartneck, C. (2015). Anthropomorphism:
opportunities and challenges in human–robot interaction. International Journal of Social
Robotics, 7(3), 347-360.
Broz, F., Nourbakhsh, I., & Simmons, R. (2013). Planning for human–robot interaction in socially
situated tasks. International Journal of Social Robotics, 5(2), 193-214.
Recommended Books:
Dautenhahn, K. & Saunders, J. (Eds.) (2011). New Frontiers in Human-Robot Interaction. John
Benjamins Publishing Co.
Rahimi, M. & Karwowski, W. (Eds.) (2003). Human-Robot Interaction. Taylor & Francis.
One of the biggest changes to computing over the past several years has been the incredible growth of
mobile as a computing platform. We really live in a mobile-first world, and that introduces some
significant design challenges.
Screen real estate is now far more limited, the input methods are less precise and the user is distracted.
But mobile computing also presents some really big opportunities for HCI. Thanks in large part to
mobile we're no longer interested just in a person sitting in front of a computer. With mobile phones,
most people have a computer with them at all times anyway.
For me, the big one is that we haven't yet reached a point where we can use mobile phones for all the
tasks we do on computers. Smart phones are great for social networking, personal organization,
games, and lots of other things. But we haven't yet reached a point where the majority of people would
sit down to write an essay, or do some programming on smart phones. Why haven't we? What do we
need to do to make smart phones into true replacements for traditional desktop and laptop
computers?
Kjeldskov, J., & Paay, J. (2012, September). A longitudinal review of Mobile HCI research
methods. In Proceedings of the 14th International Conference on Human-Computer Interaction
with Mobile Devices and Services, 69-78. ACM.
Kjeldskov, J., & Skov, M. B. (2014, September). Was it worth the hassle?: ten years of mobile HCI
research discussions on lab and field evaluations. In Proceedings of the 16th International
Conference on Human-Computer Interaction with Mobile Devices & Services, 43-52. ACM.
McMillan, D., Morrison, A., & Chalmers, M. (2013, April). Categorised ethical guidelines for large
scale mobile HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems, 1853-1862. ACM.
Sakamoto, D., Komatsu, T., & Igarashi, T. (2013, August). Voice augmented manipulation: using
paralinguistic information to manipulate mobile devices. In Proceedings of the 15th
International Conference on Human-Computer Interaction with Mobile Devices and Services, 69-
78. ACM.
Did that exchange make any sense? I asked Amanda for the time and she replied by saying I can go
ahead and get lunch. The text seems completely nonsensical, and yet hearing that, you may have
filled in the context that makes this conversation logical. You might think that I asked a while ago what
time we were breaking for lunch, or maybe I mentioned that I forgot to eat breakfast. Amanda would
have that context and she could use it to understand why I'm probably asking for the time. Context is a
fundamental part of the way humans interact with other humans. Some lessons we'll talk about even
suggest that we are completely incapable of interacting without context.
That's where context-sensitive computing comes in. Context-sensitive computing attempts to give
computer interfaces the contextual knowledge that humans have in their everyday lives. For example, I
use my mobile phone differently depending on whether I'm sitting on the couch at home, or using it in
my car, or walking around on the sidewalk. Imagine I didn't have to deliberately inform my phone of
what mode I was in though. Imagine if it just detected that I was in my car and automatically brought
up Google Maps and Audible for me. Services have started to emerge to provide this, but there's an
enormous amount of research to be done on context-sensitive computing, especially as it relates to things like wearables, augmented reality, and ubiquitous computing.
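To make the idea concrete, here is a toy sketch of context detection; the sensor signals, thresholds, and app suggestions are invented purely for illustration:

    # Guess the user's context from simple signals and suggest apps.
    def detect_context(speed_kmh, paired_to_car_bluetooth):
        if paired_to_car_bluetooth or speed_kmh > 30:
            return "driving"
        if speed_kmh > 6:
            return "running"
        return "stationary"

    SUGGESTIONS = {
        "driving": ["Google Maps", "Audible"],
        "running": ["exercise tracker"],
        "stationary": [],
    }

    context = detect_context(speed_kmh=55, paired_to_car_bluetooth=True)
    print(context, SUGGESTIONS[context])  # -> driving ['Google Maps', 'Audible']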
Abowd, G. D., Dey, A. K., Brown, P. J., Davies, N., Smith, M., & Steggles, P. (1999, September).
Towards a better understanding of context and context-awareness. In International Symposium
on Handheld and Ubiquitous Computing (pp. 304-307). Springer Berlin Heidelberg.
Yau, S. S., Karim, F., Wang, Y., Wang, B., & Gupta, S. K. (2002). Reconfigurable context-sensitive
middleware for pervasive computing. IEEE Pervasive Computing, 1(3), 33-40.
Brown, B., & Randell, R. (2004). Building a context sensitive telephone: some hopes and pitfalls
for context sensitive computing. Computer Supported Cooperative Work (CSCW), 13(3-4), 329-
345.
Bouabid, A., Lepreux, S., Kolski, C., & Havrez, C. (2014, July). Context-sensitive and Collaborative
application for Distributed User Interfaces on tabletops. In Proceedings of the 2014 Workshop
on Distributed User Interfaces and Multimodal Interaction (pp. 23-26). ACM.
Recommended Books:
Stojanovic, Dragan (Ed.). (2009). Context-Aware Mobile and Ubiquitous Computing for Enhanced
Usability: Adaptive Technologies and Applications: Adaptive Technologies and Applications.
Information Science Reference.
Brezillion, P. & Gonzalez, A. (Eds.). (2014). Context in Computing: A Cross-Disciplinary Approach
for Modeling the Real World. Springer.
Dedicated Courses:
Out of Context: A Course on Computer Systems That Adapt To, and Learn From, Context, from
MIT OpenCourseware
As this course goes on, you'll find that I'm on camera more often than you're accustomed to seeing in a
Udacity course. Around half this course takes place with me on camera. There are a couple of reasons
for that. The big one is that this is Human Computer Interaction. So it makes sense to put strong
emphasis on the Human. But another big one is that when I'm on camera, I can express myself through gestures instead of just words and voice intonation. I can, for example, make a fist and really drive home
and emphasize a point. I can explain that a topic applies to a very narrow portion of the field or a very
wide portion of the field. We communicate naturally with gestures every day. In fact, we even have an
entire language built out of gestures. So wouldn't it be great if our computers could interpret our
gestures as well?
That's the emerging field of Gesture-Based Interaction. You've seen this with things like the Microsoft Kinect, which has far-reaching applications from healthcare to gaming. We've started to see some
applications of gesture based interaction on the go as well with wrist bands that react to certain hand
motions. Gesture-based interaction has enormous potential: the fingers have some of the finest muscle control in the human body.
Waldherr, S., Romero, R., & Thrun, S. (2000). A gesture based interface for human-robot
interaction. Autonomous Robots, 9(2), 151-173.
Hürst, W., & Van Wezel, C. (2013). Gesture-based interaction via finger tracking for mobile
augmented reality. Multimedia Tools and Applications, 62(1), 233-258.
Lv, Z., Halawani, A., Feng, S., Ur Réhman, S., & Li, H. (2015). Touch-less interactive augmented
reality game on vision-based wearable device. Personal and Ubiquitous Computing, 19(3-4), 551-
567.
Steins, C., Gustafson, S., Holz, C., & Baudisch, P. (2013, August). Imaginary devices: gesture-based interaction mimicking traditional input devices. In Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 123-126). ACM.
Lu, Z., Chen, X., Li, Q., Zhang, X., & Zhou, P. (2014). A hand gesture recognition framework and wearable gesture-based interaction prototype for mobile devices. IEEE Transactions on Human-Machine Systems, 44(2), 293-299.
Mazalek, A., Shaer, O., Ullmer, B., & Konkel, M. K. (2014). Tangible Meets Gestural: Gesture
Based Interaction with Active Tokens. In ACM CHI 2014 Workshop on Gesture-Based Interaction
Design, ACM CHI.
Wilson, A. D. (2004, October). TouchLight: an imaging touch screen and display for gesture-
based interaction. In Proceedings of the 6th International Conference on Multimodal Interfaces
(pp. 69-76). ACM.
Recommended Books:
Premaratne, P. (2014). Human Computer Interaction Using Hand Gestures. Springer Science &
Business Media.
Ji, Y. & Choi, S. (Eds.). Advances in Affective and Pleasurable Design. AHFE Conference.
I always find it interesting how certain technologies seem to come full circle. For centuries we only interacted directly with the things that we built, and then computers came along. And suddenly we
needed interfaces between us and our tasks. Now, computers are trying to actively capture natural
ways we've always interacted. Almost every computer I encounter nowadays has a touch screen. That's
a powerful technique for creating simple user interfaces because it shortens the distance between the
user and the tasks they’re trying to accomplish. Think about someone using a mouse for the first time.
He might need to look back and forth from the screen to the mouse to see how interacting down here changes things he sees up here. With a touch-based interface, he interacts the same way he uses things
in the real world around him. A challenge can sometimes be a lack of precision, but to make up for that we've also created pen-based interaction. Just like a person can use a pen on paper, they can also use a
pen on a touch screen. And in fact, you might be quite familiar with that, because most Udacity courses
use exactly that technology. They record someone writing on a screen. That gives us the precision
necessary to interact very delicately and specifically with our task. And as a result, tablet-based interaction methods have been used in fields like art and music. Most comics you find on the internet
are actually drawn exactly like this, combining the precision of human fingers with the power of
computation.
Moran, T. P., Chiu, P., & Van Melle, W. (1997, October). Pen-based interaction techniques for
organizing material on an electronic whiteboard. In Proceedings of the 10th Annual ACM
Symposium on User Interface Software and Technology (pp. 45-54). ACM.
Ren, X., & Moriya, S. (2000). Improving selection performance on pen-based systems: a study of
pen-based interaction for selection tasks. ACM Transactions on Computer-Human Interaction
(TOCHI), 7(3), 384-416.
Vogel, D., & Baudisch, P. (2007, April). Shift: a technique for operating pen-based interfaces
using touch. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
(pp. 657-666). ACM.
Wilkinson, G., Kharrufa, A., Hook, J. D., Pursgrove, B., Wood, G., Haeuser, H., ... & Olivier, P.
(2016, May). Expressy: Using a Wrist-worn Inertial Measurement Unit to Add Expressiveness to
Touch-based Interactions. In Proceedings of the ACM Conference on Human Factors in
Computing Systems: Association for Computing Machinery (ACM).
Hardy, R., & Rukzio, E. (2008, September). Touch & interact: touch-based interaction of mobile
phones with displays. In Proceedings of the 10th International Conference on Human-Computer
Interaction with Mobile Devices and Services (pp. 245-254). ACM.
Häikiö, J., Wallin, A., Isomursu, M., Ailisto, H., Matinmikko, T., & Huomo, T. (2007, September).
Touch-based user interface for elderly users. In Proceedings of the 9th International Conference
on Human-Computer Interaction with Mobile Devices and Services (pp. 289-296). ACM.
Recommended Books:
Annett, M. (2014). The Fundamental Issues of Pen-based Interaction with Tablet Devices.
University of Alberta.
Berque, D., Prey, J., & Reed, R. (2006). The Impact of Tablet PCs and Pen-based Technology on
Education: Vignettes, Evaluations, and Future Directions. Purdue University Press.
Reed, R. (2010). The Impact of Tablet PCs and Pen-based Technology on Education: Going
Mainstream. Purdue University Press.
One of the biggest trends of the information age is the incredible availability of data. Scientists and
researchers use data science and machine learning to look at lots of data and draw conclusions. But
oftentimes those conclusions are only useful if we can turn around and communicate them to ordinary
people. That's where information visualization comes in. Now at first glance you might not think of
data visualization as an example of HCI. After all, I could draw a data visualization on a napkin and print it in a newspaper, and there's no computer involved anywhere in that process. But computers give us a
powerful way to re-represent data in complex, animated, and interactive ways. We'll put links to some
excellent examples in the notes. Now what's particularly notable about data visualization in HCI is the degree to which it fits perfectly with our methodologies for designing good interfaces. One goal of a
good interface is to match the user's mental model to the reality of the task at hand. In the same way,
the goal of information visualization is to match the reader's mental model of the phenomenon to the
reality of it. So the same principles we discussed for designing good representations apply directly to
designing good visualizations. After all, a visualization is just a representation of data.
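As a tiny example of that re-representation, the sketch below turns a made-up table of step counts into a bar chart using matplotlib; the data is invented purely for illustration:

    # Re-represent a small data table visually as a bar chart.
    import matplotlib.pyplot as plt

    steps_per_day = {"Mon": 4200, "Tue": 6100, "Wed": 3800, "Thu": 7400, "Fri": 5000}

    plt.bar(list(steps_per_day.keys()), list(steps_per_day.values()))
    plt.xlabel("Day")
    plt.ylabel("Steps")
    plt.title("Steps per day (illustrative data)")
    plt.show()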
Stasko, J. T. (1990). Tango: A framework and system for algorithm animation. Computer, 23(9),
27-39.
Fekete, J. D., Van Wijk, J. J., Stasko, J. T., & North, C. (2008). The value of information
visualization. In Information visualization. Springer Berlin Heidelberg.
Kapler, T., & Wright, W. (2005). GeoTime information visualization. Information Visualization,
4(2), 136-146.
Gleicher, M., Albers, D., Walker, R., Jusufi, I., Hansen, C. D., & Roberts, J. C. (2011). Visual
comparison for information visualization. Information Visualization, 10(4), 289-309.
Hundhausen, C. D., Douglas, S. A., & Stasko, J. T. (2002). A meta-study of algorithm visualization
effectiveness. Journal of Visual Languages & Computing, 13(3), 259-290.
CSCW stands for Computer-Supported Cooperative Work. The field is just what the name says: how do we use computers to support people working together? You're watching this course online, so odds are
that you've experienced this closely. Maybe you've worked on a group project with a geographically
distributed group. Maybe you've had a job working remotely. Distributed teams are one example of
CSCW in action but there are many others. The community often breaks things down into two
dimensions.
Time and place. We can think of design as whether or not we're designing for the users in the same
time and place or users at different times in different places. This course is an example of designing for
different time and different place. You're watching this long after I recorded this, likely from far away
from our studio. Workplace chat utilities like Slack and HipChat would be examples of same time, different place. They allow people to communicate instantly across space, mimicking the real-time conversations they would have in a shared office.
Schmidt, K., & Bannon, L. (1992). Taking CSCW seriously. Computer Supported Cooperative Work
(CSCW), 1(1-2), 7-40.
Schmidt, K., & Simonee, C. (1996). Coordination mechanisms: Towards a conceptual foundation
of CSCW systems design. Computer Supported Cooperative Work (CSCW), 5(2-3), 155-200.
Ackerman, M. S. (2000). The intellectual challenge of CSCW: the gap between social
requirements and technical feasibility. Human-Computer Interaction, 15(2), 179-203.
Bruckman, A. (1998). Community support for constructionist learning. Computer Supported
Cooperative Work (CSCW), 7(1-2), 47-86.
Bruckman, A., Karahalios, K., Kraut, R. E., Poole, E. S., Thomas, J. C., & Yardi, S. (2010). Revisiting
research ethics in the Facebook era: Challenges in emerging CSCW research. In CSCW 2010:
Proceedings of the 2010 Conference on Computer Supported Cooperative Work.
Luther, K., Fiesler, C., & Bruckman, A. (2013, February). Redistributing leadership in online
creative collaboration. In Proceedings of the 2013 Conference on Computer-Supported
Cooperative Work (pp. 1007-1022). ACM.
Recommended Books:
Lubich, H. (1995). Towards a CSCW Framework for Scientific Cooperation in Europe. Springer Science & Business Media.
Diaper, D. & Sanger, C. (Eds.). (2012). CSCW in Practice: an Introduction and Case Studies.
Social computing is the portion of HCI that's interested in how computers affect the way we interact
and socialize. One thing that falls under this umbrella is the idea of recreating social norms within
computational systems. So for example, when you chat online, you might often use emojis or
emoticons. Those are virtual recreations of some of the tacit interaction we have with each other on a
day-to-day basis. So, for example, these all take on different meanings depending on the emoji provided. Social computing is interested in a lot more than just emojis, of course.
From online gaming and Wikipedia, to social media, to dating websites, social computing is really
interested in all areas where computing intersects with our social lives.
Wang, F. Y., Carley, K. M., Zeng, D., & Mao, W. (2007). Social computing: From social informatics
to social intelligence. IEEE Intelligent Systems, 22(2), 79-83.
Parameswaran, M., & Whinston, A. B. (2007). Research issues in social computing. Journal of the
Association for Information Systems, 8(6), 336.
Wang, F. Y. (2007). Toward a paradigm shift in social computing: the ACP approach. IEEE
Intelligent Systems, 22(5), 65-67.
Vassileva, J. (2012). Motivating participation in social computing applications: a user modeling
perspective. User Modeling and User-Adapted Interaction, 22(1-2), 177-201.
Scekic, O., Truong, H. L., & Dustdar, S. (2013). Incentives and rewarding in social computing.
Communications of the ACM, 56(6), 72-82.
Luther, K., Fiesler, C., & Bruckman, A. (2013, February). Redistributing leadership in online
creative collaboration. In Proceedings of the 2013 Conference on Computer-Supported
Cooperative Work (pp. 1007-1022). ACM.
One of the most exciting application areas for HCI is in helping people with special needs. Computing
can help us compensate for disabilities, injuries, and aging. Think of a robotic prosthetic, for example. Of
course, part of that is engineering, part of it is neuroscience. But it's also important to understand how
the person intends to use such a limb in the tasks they need to perform. That's HCI intersecting with
robotics.
Or take another example from some work done here at Georgia Tech by Bruce Walker, how do you
communicate data to a blind person? We've talked about information visualization, but a visualization is of little use to someone who can't see it. That's where ideas like sonification, representing data through sound, come in.
Abascal, J., & Nicolle, C. (2005). Moving towards inclusive design guidelines for socially and
ethically aware HCI. Interacting with Computers, 17(5), 484-505.
Bian, D., Wade, J., Warren, Z., & Sarkar, N. (2016, July). Online Engagement Detection and Task
Adaptation in a Virtual Reality Based Driving Simulator for Autism Intervention. In International
Conference on Universal Access in Human-Computer Interaction (pp. 538-547). Springer
International Publishing.
Biswas, P. (2007). Simulating HCI for special needs. ACM SIGACCESS Accessibility and Computing,
(89), 7-10.
Frauenberger, C., Good, J., & Keay-Bright, W. (2011). Designing technology for children with
special needs: bridging perspectives through participatory design. CoDesign, 7(1), 1-28.
Recommended Books:
Miesenberger, K., Fels, D., Archambault, D., Penaz, P., & Zagler, W. (Eds.). (2014). Computers
Helping People with Special Needs: 14th International Conference, ICCHP 2014, Paris, France,
July 9-11, 2014, Proceedings. Springer.
Antona, M. & Stephanidis, C. (2016). Universal Access in Human-Computer Interaction. Methods,
Techniques, and Best Practices: 10th International Conference, UAHCI 2016, Held as Part of HCI
International 2016, Toronto, ON, Canada, July 17-22, 2016, Proceedings. Springer.
Hi, and welcome to educational technology. My name is David Joyner and I'm thrilled to bring you this
course.
As you might guess, education is one of my favorite application areas of HCI. In fact, as I'm recording
this, I've been teaching educational technology at Georgia Tech for about a year, and a huge portion of
designing educational technology is really just straightforward HCI. But education puts some unique
twists on the HCI process. Most fascinatingly, education is an area where you might not always want to
make things as easy as possible. You might use HCI to introduce some desirable difficulties, some
learning experiences for students. But it's important to ensure that the cognitive loads students
experience during a learning task is based on the material itself. Not based on trying to figure out our
interfaces. The worst thing you can do in HCI for education is raise the student's cognitive load because
they're too busy thinking about your interface instead of the subject matter itself. Lots of very noble
Rutten, N., van Joolingen, W. R., & van der Veen, J. T. (2012). The learning effects of computer
simulations in science education. Computers & Education, 58(1), 136-153.
Tondeur, J., van Braak, J., Sang, G., Voogt, J., Fisser, P., & Ottenbreit-Leftwich, A. (2012).
Preparing pre-service teachers to integrate technology in education: A synthesis of qualitative
evidence. Computers & Education, 59(1), 134-144.
Lee, K. (2012). Augmented reality in education and training. TechTrends, 56(2), 13-21.
Zhu, E., Hadadgar, A., Masiello, I., & Zary, N. (2014). Augmented reality in healthcare education:
an integrative review. PeerJ, 2, e469.
Wu, H. K., Lee, S. W. Y., Chang, H. Y., & Liang, J. C. (2013). Current status, opportunities and
challenges of augmented reality in education. Computers & Education, 62, 41-49.
Merchant, Z., Goetz, E. T., Cifuentes, L., Keeney-Kennicutt, W., & Davis, T. J. (2014). Effectiveness
of virtual reality-based instruction on students' learning outcomes in K-12 and higher education:
A meta-analysis. Computers & Education, 70, 29-40.
Recommended Books:
Tsai, C., Heller, R., Nussbaum, M., & Twining, P. (Eds.). Computers & Education. Elsevier.
Maddux, C. & Johnson, D. (2013). Technology in Education: A Twenty-Year Retrospective.
Routledge.
A lot of current efforts in healthcare are about processing the massive quantities of data that are recorded every day. But in order to make that data useful, it has to connect to real people at some
point. Maybe it's equipping doctors with tools to more easily visually evaluate and compare different
diagnoses. Maybe it's giving patients the tools necessary to monitor their own health and treatment
options. Maybe that's information visualization so patients can understand how certain decisions affect
their well-being. Maybe it's context aware computing that can detect when patients are about to do
something they probably shouldn't do. There are also numerous applications of HCI to personal health
like Fitbit for exercise monitoring or MyFitnessPal for tracking your diet. Those interfaces succeed only if they're easily usable; ideally, they'd be almost invisible. But perhaps the most fascinating
upcoming intersection of HCI and health care is in virtual reality. Virtual reality exercise programs are
already pretty common to make living an active lifestyle more fun, but what about virtual reality for
therapy? That's actually already happening. We can use virtual reality to help people confront fears and
anxieties in a safe, but highly authentic place. Healthcare in general is concerned with the health of
humans. And computers are pretty commonly used in modern healthcare. So the applications of
human computer interaction to healthcare are really huge.
Alpay, L., Toussaint, P., & Zwetsloot-Schonk, B. (2004, June). Supporting healthcare
communication enabled by information and communication technology: Can HCI and related
cognitive aspects help? In Proceedings of the Conference on Dutch Directions in HCI (p. 12). ACM.
Riche, Y., & Mackay, W. (2005, September). PeerCare: Challenging the monitoring approach to
care for the elderly. In HCI and the older population. In Proceedings of the 19th British HCI Group
Annual Conference.
Zhu, E., Hadadgar, A., Masiello, I., & Zary, N. (2014). Augmented reality in healthcare education:
an integrative review. PeerJ, 2, e469.
Ash, J. S., Berg, M., & Coiera, E. (2004). Some unintended consequences of information
technology in health care: the nature of patient care information system-related errors. Journal
of the American Medical Informatics Association, 11(2), 104-112.
Harrison, M. I., Koppel, R., & Bar-Lev, S. (2007). Unintended consequences of information
technologies in health care—an interactive sociotechnical analysis. Journal of the American
medical informatics Association, 14(5), 542-549.
Recommended Books:
Patel, V., Kannampallil, T., & Kaufman, D. (2015). Cognitive Informatics for Biomedicine: Human
Computer Interaction in Healthcare. Springer.
Ma, M., Jain, L., & Anderson, P. (2014). Virtual, Augmented Reality and Serious Games for
Healthcare. Springer Science & Business.
Classes on network security are often most concerned with the algorithms and encryption methods that
must be safeguarded to ensure secure communications. But the most secure communication strategies
in the world are weakened if people just refuse to use them. And historically, we've found people have
very little patience for instances where security measures get in the way of them doing their tasks. For
security to be useful, it has to be usable. If it isn't usable, people just won't use it. HCI can increase the
usability of security in a number of ways. For one, it can make those actions simply easier to perform.
CAPTCHAs are forms that are meant to ensure users are humans. And they used to involve recognizing
letters in complex images, but now they're often as simple as a check-box. The computer recognizes
human-like mouse movements and uses that to evaluate whether the user is a human. That makes it
much less frustrating to participate in that security activity. But HCI can also make security more usable
by visualizing and communicating the need. Many people get frustrated when systems require passwords that meet certain standards of complexity, but that's because the requirement seems arbitrary. If the
system instead expresses to the user the rationale behind the requirement, the requirement can be
much less frustrating. I've even seen a password form that treats password selection like a game where
you're ranked against others for how difficult your password would be to guess. That's a way to incentivize strong password selection, making security more usable.
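To illustrate explaining strength rather than imposing arbitrary-seeming rules, here is a naive sketch of a strength estimate paired with a human-readable message; the scoring is a rough illustration, not a vetted security measure:

    # Naive strength estimate: bits of entropy assuming random choice from
    # the character pools used. Real estimators must also consider
    # dictionary words, reuse, and common patterns.
    import math
    import string

    def estimate_entropy_bits(password):
        pool = 0
        if any(c.islower() for c in password): pool += 26
        if any(c.isupper() for c in password): pool += 26
        if any(c.isdigit() for c in password): pool += 10
        if any(c in string.punctuation for c in password): pool += len(string.punctuation)
        return len(password) * math.log2(pool) if pool else 0.0

    bits = estimate_entropy_bits("Tr0ub4dor&3")
    print(f"Roughly {bits:.0f} bits; a longer passphrase would score higher.")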
Recommended Books:
Garfinkel, S. & Lipford, H. (2014). Usable Security: History, Themes, and Challenges Morgan &
Claypool Publishers.
Tryfonas, T. & Askoxylakis, I. (Eds.). (2014). Human Aspects of Information Security, Privacy, and
Trust. Springer.
Video games are one of the purest examples of HCI. They're actually a great place to study HCI, because
so many of the topics we discuss are so salient. For example, we discussed the need for logical mapping
between actions and effects. A good game exemplifies that. The actions that the user takes with the
controller should feel like they're actually interacting within the game world. We discussed the power
of feedback cycles. Video games are near constant feedback cycles as the user performs actions,
evaluates the results, and adjusts accordingly. In fact, if you read through video game reviews you'll find
that many of the criticisms are actually criticisms of bad HCI. The controls are tough to use, it's hard to
figure out what happened. The penalty for failure is too low or too high. All of these are examples of
poor interface design. In gaming, though, there's such a tight connection between the task and the interface that frustrations with the task can help us quickly identify problems with the interface.
Recommended Books:
Bogost, I. (2007). Persuasive Games: The Expressive Power of Videogames. MIT Press.
Bogost, I. (2011). How to Do Things With Videogames. University of Minnesota Press.
Introduction
We’ve reached the end of our HCI course. To close things off, let’s briefly recap what we’ve covered.
The purpose of this lesson isn’t just to give you an inventory of the course content. The real reason is to
give us an excuse to have this cool inception effect over here.
No, really, it’s to repeat it again to load it into your working memory one more time. After all, we know that the more often you load some content into short-term memory, the more strongly it remains in long-term memory.
Our first unit was the unit on design principles. To start that unit off, we began by investigating the
process of identifying a task. We discussed how a task is not just the actions that a user performs, but
it’s the combination of their motivations, their goals, the context, and so on. We emphasized that we’re
not just interface designers, we’re task designers. We design tasks, and then design the interfaces that
make those tasks possible. We then explored three views on the user’s role in a task. We might view
them as an information processor, like another computer in the system. We might view them as a
predictor, someone operating a mental model of the system. We might view them as a participant,
someone working in a larger context beyond just our interface. Finally, we discussed how the views we
take inform the designs we create.
Next in that unit, we discussed feedback cycles. Feedback cycles, as we discussed, are ubiquitous
in all areas of life. They’re how we learn and adapt to our environments, and so they’re also how we
learn and adapt to the interfaces available to us. We then described the two parts of feedback cycles.
Gulfs of execution covered how users go from personal goals to external actions. Gulfs of evaluation
covered how the users then evaluated whether the results of those actions met their goals. We can
describe basically all of HCI as designing ways to bridge these two gulfs: helping users accomplish their
goals more easily, and helping users understand that their goals have been accomplished more quickly.
We then moved on to direct manipulation, which was one way to create interfaces with very short gulfs
of execution and evaluation. Direct manipulation involved creating interfaces where the user felt like
they were directly interacting with the object of the task. Instead of typing commands or selecting
operators, they would more physically participate with the interface. The goal of this, and really of any
good interface design, was to create interfaces that become invisible. Invisible interfaces are those that
allow the user to focus completely on the task instead of on the interface. We noted that nearly any
interface can become invisible when the user has enough practice and expertise, but our goal is to
create interfaces that vanish sooner by good design.
We’re discussing human-computer interaction, and that means we have to understand the human
portion of the equation. So, we also took a crash course in some basic psychology. We broke the
human down into three systems: perception, cognition, and the motor system. With perception, we
covered the strengths and limitations of the human visual, auditory, and kinesthetic senses. We
discussed how each can be useful for different kinds of information. Then, we discussed some of the
limitations to human cognition. We focused on memory: how many things can be stored at a time, and
how things can be stored more permanently. We also discussed the notion of cognitive load. We
focused especially on how we should use our interfaces to reduce the user’s cognitive load. Finally, we
discussed the limitations of the human motor system, especially how those limitations change with age
or in the presence of distractions. We’re designing for humans, so these limitations and advantages are
key to how we design our interfaces.
Human-computer interaction has a long and rich history, initially drawing from human factors
engineering before becoming a field of its own. During that history, it’s developed lots of principles and
heuristics for how to design good interfaces. Lots and lots and lots, in fact. Literally thousands. In this
lesson, we covered fifteen of the most significant ones, drawn from four sets of design principles. We
covered the principles of Don Norman, Jakob Nielsen, Larry Constantine, Lucy Lockwood, and the Center for Universal Design. Among these, I would argue the most significant principles were Affordances,
Mappings, and Constraints. Affordances are parts of interfaces that, by their design, tell the user what
to do. A good mapping tells the user what the effect of that interaction will actually be. Constraints
ensure that the user only chooses to do the correct things. With those three combined, as well as the
other heuristics, we can create interfaces that vanish between the user and the task very quickly.
Every one of our users has some mental understanding of their task, as well as where our interfaces fit
in that task. We call that their mental model. They use that mental model to simulate and predict the
effects of certain actions. Our goal is for the user’s mental model to match the reality of the task and
the interface. To accomplish that, we try to design representations with clear mappings to the
underlying task. That’s how we can ensure the user’s mental model of the system is accurate and
useful. We discussed mistakes, which are errors that occur based on inaccurate mental models, and we
discussed slips, which are errors that occur despite accurate mental models. We also talked about a
couple of the challenges that can arise in trying to help users build accurate mental models. One of
those was called expert blind spot, which occurs when we lose sight of our own expertise and forget
what it’s like to be a novice. And the other was learned helplessness, which is when users learn that
they have no real ability to accomplish their goals because of a broken feedback cycle. In designing
representations that lead to accurate mental models, we need to make sure to avoid both of these.
We’ve discussed repeatedly that HCI is in large part about understanding tasks. As designers, we design
tasks that feature interfaces, not just interfaces alone. To accomplish that, it’s important to have a very
clear understanding of the task for which we’re designing. Toward that end, we discussed task analysis.
Task analyses are ways of breaking down tasks into formal workflows to aid the design process. We
covered two general kinds of task analyses. Information processor models, like the GOMS model, focus
on the user’s goals, operators, and methods. They’re concerned primarily with what we can observe.
Cognitive task analyses, on the other hand, try to get inside the user’s head and understand the thought
process in the task as well. Both of these approaches are valuable to designing good tasks with usable
interfaces.
Earlier, we discussed the idea of cognitive load. Cognitive load was the principle that humans have a set
amount of cognitive resources, and if they’re overloaded, their performance suffers and they get
frustrated. So, how do we reduce the user’s cognitive load? Well we can make the task easier, sure, but
we can also add to their cognitive resources. That’s the principle of distributed cognition: the
interactions of humans and artifacts together have more cognitive resources than individuals. Devices
and interfaces can exhibit cognitive properties like memory and reasoning, offloading those demands
from the user. We also discussed three related theories: social cognition, situated action, and activity
theory. All three of these put a strong emphasis on the context of a task, whether it be the physical
context, social context, or societal context.
The design principles unit of the course covered the fundamental principles and ideas developed over
decades of work in this space. However, we can’t create good interfaces just by applying old principles
to new problems. Those old principles can help us make progress much faster, but to design good
interfaces, we have to involve the user. That’s perhaps the most important principle of HCI: user-
centered design. User-centered design advocated keeping the user at the heart of all our design
activities. For us, that isn’t just the person using the tool, but it’s also the people affected by the tool’s
very existence. To keep the user in mind, we use an iterative design life cycle that focuses on getting
feedback from the user early and often. The methods of that life cycle were the core of the methods
unit of the course.
When we’re doing research in HCI, we have access to some pretty sensitive personal data about our
participants. There are huge ethical considerations around privacy and coercion that we have to keep in
mind when participating in the design life cycle. So, we discussed the role of the institutional review board,
or IRB, for university research. They oversee studies and make sure we’re preserving our participants’
rights. They also make sure that the benefits of our research outweigh the risks, and as part of that,
they help ensure our methods are sound enough to have benefits in the first place. We also discussed
how industry doesn’t have the same kind of oversight. However, some companies have partnered with
universities to participate with their IRBs, while others have formed internal IRBs. All of this is driven by
the need to preserve the rights of our users. That's a key part of user-centered design.
The first stage of the design life cycle was needfinding. Needfinding was how we developed a keen
understanding of the needs of our users. One of the biggest mistakes a designer can make is assuming
they already understand the user and the task before ever interacting with them. There are several
questions about the user we need to answer before we’re ready to start designing. So, to get a good
understanding of the user and the task, we discussed several methods. We might start with methods
that have little direct interaction with users, like watching them in the wild or trying out the task
ourselves. We might use those to inform more targeted needfinding exercises, like interviews, focus
groups, and surveys. By combining multiple needfinding approaches, we can build a strong model of the
user and the task that will help us design usable interfaces.
Once we have a solid understanding of the user and the task, we want to start brainstorming possible
designs. The important thing here is to make sure we don’t get fixated on one idea too early. That sort
of tunnel vision risks missing lots of fantastic ideas. So, we want to make sure to engage in a well-
defined brainstorming process. I recommend starting that with individual brainstorming, then setting
up a group brainstorming session that ensures everyone’s ideas are heard. From there, we proposed
some different ideas on how to explore those design alternatives, through methods like personas,
scenarios, and timelines. Our end goal here was to arrive at a set of designs worth moving on to the
prototyping stage.
In user-centered design, our goal is to get user feedback early and often. So, once we have some design
alternatives, our goal is to get them in front of users as quickly as possible. That’s the prototyping stage,
where we take those design alternatives and build prototypes we can show to users. At first, those
prototypes might be very low-fidelity. We might start by just describing or drawing the designs. Those
are verbal or paper prototypes. We want to keep our designs easy to revise. We might even revise the
prototypes live while working with users. As we get more and more feedback, we build higher-fidelity
prototypes to explore more detailed questions about our designs. We might use wireframes or set up
live simulated demos. At every stage of the way, we design our prototypes in a way to get the user’s
view and inform the next iteration of the design life cycle.
The goal of the design life cycle is to get frequent user feedback. Or, to put it differently, to frequently
evaluate our interface ideas. Frequent, rapid feedback cycles are important for users of our interfaces,
and they’re also important to us as designers of interfaces. That’s where evaluation comes into play.
Once we’ve designed an interface, whether it’s just an idea in our head or a full-fledged working version,
it’s time to evaluate it with users. Early on, that may be qualitative evaluation to get the full picture of
the user experience. Later, that might be empirical evaluation, to more formally capture the results of
the interface. Along the way, we might also employ predictive evaluation to try to anticipate how users
will react to our designs. These methods are the foundation of user-centered design: design that
features frequent user evaluation of our ideas.
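To make the predictive side concrete, here is a minimal sketch of one well-known predictive evaluation technique, the Keystroke-Level Model. The operator durations below are the commonly cited textbook averages; the task breakdown itself is a hypothetical example of my own.

# A minimal Keystroke-Level Model (KLM) estimate.

OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mentally prepare for the next step
}

# Hypothetical breakdown for deleting a file via a context menu:
# prepare, point at the file, right-click, prepare, point at "Delete", click.
task = ["M", "P", "K", "M", "P", "K"]

predicted = sum(OPERATOR_TIMES[op] for op in task)
print(f"Predicted completion time: {predicted:.2f} seconds")  # 5.46 seconds

Estimates like this let us compare design alternatives before a user ever touches a prototype, which is exactly the appeal of predictive evaluation.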
Traditional HCI can be a slow, deliberate process, fed by constant interaction with users as we slowly
ramp up the fidelity of our prototypes. In the past, that was because every phase of the process was
expensive: development, distribution, evaluation, and updates were all expensive. But now, in some
contexts, those steps have become much, much cheaper. Some web development can be done with
drag-and-drop interfaces. We can now distribute applications to millions of users essentially for free.
We can pull back enormous quantities of live data from them. We can push updates to all of them in
real time. Given all that, sometimes it might be more prudent to put together working versions quickly
to start getting real user data faster. So, we also discussed agile methods for development while
keeping an eye on the design life cycle. These methods aren’t appropriate for all contexts, especially
those with high costs of failure, but for many contexts they can really increase the iteration speed of our
design life cycle.
Over the course of our conversations, I’ve asked you to revisit the area of HCI in which you’re most
interested during each topic. I’ve asked you to brainstorm how the various design principles and
theories apply to the area you chose. I’ve asked you to think of a design life cycle that would support
developing in that chosen application area. We’ve also given you lots of information to read about your
chosen area. You have all the tools necessary to start developing. I’m looking forward to seeing what
you come up with.
One of the famous models for communicating is to tell them what you’re going to tell them, tell them,
then tell them what you told them. That’s what we’ve done, at several levels of abstraction. At the
beginning of the course, we told you the overall structure of the course. Within each unit, we outlined
the content of the unit. Within each lesson, we previewed the content of that particular lesson. Then
we delivered the content. Then, we summarized each lesson. Then, we summarized each unit. Now, we've once again summarized the course as a whole. So, we're done, right? ...not quite. There are two other useful things to cover: the fields closely related to HCI, and where you might want to go next.
Introduction
At the beginning of our conversations, we talked about how HCI is part of a broader hierarchy of fields.
It draws heavily from Human Factors Engineering; in fact, it is essentially Human Factors Engineering applied specifically to software.
Human-computer interaction was concerned with the interaction between users and tasks as mediated
by things equipped with computing power. Nowadays, that's more and more common: it's a relatively recent phenomenon that things like watches and cameras became computers. When a device
doesn’t have computational power behind it, though, there are still design considerations to make. In
fact, many of our principles and many of our methods apply just as well to non-computational
interfaces. What makes human factors engineering particularly interesting is that it deals with more
constraints, physical constraints. Things like the height of the user or the size of a hand come up in
human factors engineering. What’s interesting, though, is that as many devices start to have
computational resources added to them, human factors engineering and human-computer interaction
will start to interact more and more. My watch, for example, has no computational resources; it's
completely within the human factors area. But smartwatches see some interesting interactions
between human factors and HCI. Human factors determines the size of the watch, which determines
the size of the battery or the additional sensors that can be placed inside. Those things then influence
what we can do on the HCI side. So, if you're dealing with anything related to ubiquitous computing, wearable devices, or contextual computing, human factors engineering is a great place to brush up, and we'll throw you some resources in the notes below.
Likely the most significant subfield of HCI is user interface design. Colloquially, user interface design
most often refers to designing the interaction between a user and some rectangular on-screen interface on a computer, a laptop, a smartphone, and so on. For a long time, user interface design and HCI were pretty much
synonymous because the vast majority of interaction between users and computers happened via a
rectangular on-screen interface connected to a mouse and keyboard. It's only relatively recently that we've started to see interaction break out of that literal box. And to a certain extent, the term user interface
design captures this as well. Interface doesn’t have to mean a screen. Colloquially, though, I find most
classes on UI design focus on designing screens and interacting with screens. That’s a massive and well-
developed field, and in fact a lot of this course’s material comes from the user interface design space.
There are some more narrow things user interface design is concerned with, though. UI design has its
own set of design principles that apply more narrowly to the design of traditional computer software or
web sites.
Some of these principles guide how people visually group things together. These are called Gestalt grouping principles. Proximity is one: when things are close together, we mentally put them into groups. Shown twelve identical dots spaced unevenly, for instance, you likely see two groups of three and one group of six. Continuation is another: when elements fall along implicit lines, we perceive them as connected. Motion groups things too: even after a set of moving circles stops, you likely still see the diamond their arrangement formed. These Gestalt principles underlie user interface design's emphasis on designing with good grids in mind, just the way magazines and newspapers have done for centuries. We've already discussed this principle a lot, in part because a new grid-based interface leverages the analogy to the old interface. But its value isn't just in the analogy: its value is also in the way it instantiates the same Gestalt principles that guided the layout of a newspaper.
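As a rough illustration of the proximity principle described above, here is a minimal sketch using matplotlib; the library choice is mine, since the course illustrated this with static figures. Twelve identical dots, spaced unevenly, read as two groups of three and one group of six.

# Gestalt proximity: identical dots, grouped purely by spacing.
import matplotlib.pyplot as plt

xs = [0, 1, 2,                    # a group of three
      5, 6, 7,                    # another group of three
      10, 11, 12, 13, 14, 15]     # a group of six
ys = [0] * len(xs)

plt.scatter(xs, ys, s=200, color="black")
plt.axis("off")
plt.show()

Nothing about the dots themselves differs; spacing alone creates the perceived groups, which is why consistent grids and whitespace are such powerful tools in interface layout.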
For a lot of people, HCI and user experience design are essentially the same thing. For me, UX design is
a little bit more prescriptive, while HCI is a little more descriptive. HCI describes how people learn and
interpret interfaces, while UX design prescribes what you should do with that information. But in many
ways, that’s just applied HCI: the content is the same, the question is how you use it. If you choose to
continue your studies into UX design, though, there are a few new things you’ll encounter. You’ll see an
increased focus on the mood and joy of the user: we want them to have a great experience, not just
accomplish a task. You’ll see more attention paid to the user’s motivations behind engaging with the
interface. You’ll also see a greater emphasis on the interaction between the user, the interface, and the
context all around them, as these are all parts of the user experience. These are all things we’ve talked
about, but user experience gets into prescribing these with more specificity. So, if you want some more
information on user experience design, we’ll put some resources in the notes below.
Early in our conversations, we described how HCI is generally about an interaction between design and
research. We use research findings to inform our designs, and we use the success of our designs to
create new research findings. As you explore more HCI, though, you’ll find that there’s plenty of room
to specialize in one side or the other. Professional designers focus almost exclusively on the creation
side, but there are lots of people who focus exclusively on the research side of HCI. Many universities
like Georgia Tech and Carnegie Mellon have research faculty dedicated to understanding the way people
interact with technology at individual, group, and societal levels. So if you’re more interested in
understanding how people interact with interfaces than in designing new ones, you might be interested
in taking more of a research bent on HCI. This class is built from the research perspective more than the
design perspective, so you’ve already got a great foundation. We’ll add some links below if you want to
explore some more.
HCI research broadens out into a field called human-centered computing. While much of HCI research is
concerned with the immediate interactions between people and computers, human-centered
computing is interested more generally with how computers and humans affect each other at a societal
level. There’s a lot of fascinating research going on in this area. Some of the questions people are
addressing are: How did participants in the Arab Spring use tools like Facebook and Twitter to
coordinate? How can information visualizations be employed to help people better understand their
diet or energy usage? How does access to computing resources influence early childhood education?
Now notice that these issues don’t necessarily involve designing new interfaces or creating new tools.
They involve looking at the way people interact with computers more generally—and not just specific
tools, but the ubiquity of technology as a whole. If you’re interested more in this, we’ll put some
materials in the notes below.
When we described mental models, we were actually briefly touching on a deep body of literature from
the cognitive science field. Cognitive science is the general study of human thought, mental
organization, and memory.
Now, cognitive science isn't a subfield of HCI, but HCI informs a good portion of cognitive science research. We can design interfaces that assume humans think and process in a certain way, and how well users actually perform with those interfaces then serves as evidence for or against those assumptions.
When we discussed feedback cycles, we mentioned that the user experience applies not only at the
individual level, but also at the group level. Distributed cognition, too, was interested in how
interactions between people can be mediated by interfaces, and how the output and knowledge of
those interactions can't be attributed narrowly to one particular part of the system but rather to the
system as a whole. The ubiquity of human interaction and the potential of computers to mediate
interaction between people gives rise to fields that investigate collaboration across interfaces. These
fields ask questions like: how can computers be used to allow people to collaborate across distance and time? And how can computers be used to enhance the collaboration of people working together in the same place at the same time? These fields look at how computers can support things like cooperative
work and collaborative learning. For example, how does Wikipedia enable people across enormous
variations in location and time to work together to capture knowledge? Or how do online courses allow
teachers and students to interact and learn asynchronously across distances? Or how can computers be
used to facilitate conversations between people with different backgrounds, expertises, or even
languages? These are pretty well-developed fields, known as computer-supported cooperative work (CSCW) and computer-supported collaborative learning (CSCL), so if you'd like to learn more, we'll put some more
information in the notes below.
To close out, my work is at the intersection of artificial intelligence, human-computer interaction, and
education. My research is largely on how to use AI to create valuable learning experiences. Setting
aside the education part for a second, though, there is also a rich interaction between artificial
intelligence and human computer interaction in the form of intelligent user interfaces. This field looks
at how we can apply AI techniques to adapting user interfaces to their users. Now an infamous example
of this is Clippy, the Microsoft Office assistant: he tried to infer what you were working on and give you
in-context feedback on it. Intelligent user interfaces have come a long way since then, though. Google
Now, for example, is consistently trying to learn from your routine and give you information when you
need it. One of my favorite experiences with intelligent user interfaces came from the interaction
between Google Maps, Gmail, and Google Calendar. Google Calendar had automatically imported a restaurant reservation I had made from Gmail, along with the location information. Then, Google Maps,
knowing where I was, detected that there was unusual traffic between me and the reservation, and
buzzed me to let me know when to leave to arrive on time. I hadn’t checked traffic, but I was on time
for my reservation because of the intelligence of that user interface. It knew what I needed to know and
when I needed to know it. So if you’d like to hear more about the overlap between artificial intelligence
and human-computer interaction, we’ll put some information in the notes below.
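As a back-of-the-envelope illustration of that "time to leave" inference, here is a minimal sketch with hypothetical inputs; the real systems fuse live traffic, location, and calendar data in far more sophisticated ways.

# A toy version of the "time to leave" inference described above.
# Every input below is a hypothetical placeholder.
from datetime import datetime, timedelta

reservation = datetime(2017, 7, 21, 19, 0)  # parsed from the confirmation email
travel_time = timedelta(minutes=35)         # a live traffic estimate
buffer = timedelta(minutes=5)               # arrive a little early

leave_by = reservation - travel_time - buffer
print("Notify the user at:", leave_by.strftime("%H:%M"))  # 18:20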
Human-computer interaction is a massive field with lots of sub-domains. This course has covered the fundamental methods and principles of HCI, but there are lots of directions to go
from here. In this lesson, I’ve attempted to give you some idea of where you might look next.
You might be interested in the research side of HCI and exploring more about how technology influences
the way people think and act.
You might be interested in the design side and creating excellent user interfaces and experiences.
Or you might be interested in artificial intelligence and designing interfaces that adapt to their users’
needs. Whatever your interest, there's a wealth of content in HCI in front of you. The last thing we need to think about is how best to get that content.
Introduction
To close our journey through HCI, let’s take a look at where you might go from here. We've already
talked about the different application areas of HCI, like virtual reality and educational technology, and
those certainly apply to what you could do next. We've also talked elsewhere about the deeper topics in
HCI you might investigate, like intelligent user interfaces or human-centered computing. But to close,
let’s talk about the formal educational steps you might take going forward to get deeper into HCI.
The quickest way to get involved in more HCI if you’re at Georgia Tech is to see about joining a
professor’s research team. On the Georgia Tech HCI faculty listing, you’ll find listings for every HCI
faculty member along with their research interests. Find someone who’s working on the same kinds of
things that you’re working on, and see if they’d like to let you join one of their ongoing projects.
Professors are extremely busy people, but one of the things I love about Georgia Tech is the school’s
focus on fostering student education and involvement in addition to fostering quality research. So, it’s
quite likely you’ll find someone willing to let you join up and prove that you can contribute. Let’s check
out a list of what kinds of projects are going on.
New research projects are coming up all the time, and new faculty are joining every year—that’s why
I’m not listing names or specific projects. But if any of these domains sound interesting to you, check
out the HCI faculty web sites and see if there’s anything to which you’d like to contribute.
HCI is a popular topic in emerging MOOCs as well. So if you’re looking to continue your HCI education in
a slightly more formal way but don’t want to shell out the money for a formal degree, there are several
great places you can start.
First, Interaction-Design.org is a treasure trove of HCI information. It has a fantastic free open access
body of literature to study independently, including additional information on many topics we’ve
covered in this course. The site also runs quite a few of its own closed courses as well.
For more traditional MOOCs, Udacity has free courses on HCI as it applies to product design and mobile
app design, as well as a MOOC by Don Norman based on his famous book, The Design of Everyday Things.
On Coursera, Scott Klemmer, one of the most prominent HCI researchers, has a specialization entitled
Interaction Design. The University of Minnesota also has a specialization on UI design that covers a lot
of the same topics that we've covered here, developed in part by another Georgia Tech alum, Lana
Yarosh.
Georgia Tech is planning a specialization on human-computer interaction as well that might be live by
the time you see this. And that’s all just core HCI courses—all of these providers and more have courses
on applied areas of HCI like video game design, educational technology, virtual reality, and more. Most
of these courses are available for free to watch. Note also that new MOOCs are coming online all the
time, so there are quite likely some that I haven’t mentioned here. So, check out Udacity, check out
edX, check out Coursera, check out FutureLearn. Also check out Class-Central.com for a list of MOOCs
across several different platforms. Or, just Google HCI MOOC. This space is so new and fast-paced that
by the time you view this video, half the MOOCs I’ve mentioned might be gone and twice as many new
ones may have been created. We’ll try to also keep an updated list of available courses in the notes
below.
If you want to take it a step further, though, you might get an actual Master’s in Computer Science
specializing in HCI. If you’re a Georgia Tech student watching this, you might already be working
towards a Master’s in CS, and you might be wondering if the HCI specialization is available online. Right
now while I’m recording this, it’s not, but I’ll pause for a second so Amanda can let us know if that's
changed. If you’re an on-campus student watching this, or if you’re watching just an open MOOC then
you might want to look into an MS in CS with a focus on HCI. Most MSCS programs I’ve seen have an
HCI specialization or at least an HCI focus. The specialization lets you go far deeper into the field, taking
several classes on topics like core HCI, user interface design, educational technology, mixed reality
design, information visualization, and more. We’ll gather a list of schools with MSCS programs with HCI
specializations and provide it in the notes below.
If you already have a strong background in CS, you might want to go all the way to getting a Master’s
specifically in HCI. These programs aren’t as common as MSCS programs with HCI specializations, but
many universities do have them, including Georgia Tech, Carnegie Mellon, the University of Washington,
the University of Maryland, Iowa State University, and Rochester Institute of Technology. I myself
completed the MS-HCI program here at Georgia Tech before starting my PhD. In focusing an entire
Master’s degree on HCI, you’ll find you have even more time to spend getting into the relationship
between HCI and other fields. At Georgia Tech, for example, the MS-HCI program has specializations in
interactive computing, psychology, industrial design, and digital media. That allows the flexibility to
focus on different areas of HCI, like how it integrates with physical devices in industrial design or how it
helps us understand human cognition in psychology. Most Master’s programs in HCI that I’ve seen are
also heavily project-focused. Carnegie Mellon provides sponsored capstone projects from industry, and
every student in Georgia Tech's MS-HCI program completes a 6-credit-hour independent project. So, if
you’re really set on moving forward with a career in HCI, a dedicated Master’s in the field is a great way
to move forward.
If research really is your calling, though, then you’re going to want to move on to a PhD, which can have
a specialization in HCI as well. A PhD isn’t for everyone. There’s a tendency to view it as the next logical
step after a Master’s degree, but a PhD program is far more of an apprenticeship program. At Georgia
Tech at least, the PhD program actually requires fewer classes to complete than a Master’s, but that’s
because 90% of your time is spent working on research closely with a faculty advisor. But if you’re very
interested in research, the PhD may very well be the way to go. As a reminder, here were some of the project areas with ongoing HCI research at Georgia Tech.
A PhD program is a huge commitment—it’s your full-time job for typically five years. It’s absolutely not
for everyone… but it absolutely is for some people. So, if you’re really passionate about what you’ve
learned here, a PhD focusing on HCI may be the way to go.
Finally, the highest level of educational achievement in HCI is likely a PhD specifically in HCI or similar
fields. Here at Georgia Tech and at schools like Clemson and Maryland, that’s a PhD in human-centered
computing, which is actually what my PhD was in. Other schools have PhDs in similar fields—Carnegie
Mellon, Iowa State, IUPUI, and others have PhD programs in HCI as well. Pursuing a PhD in HCI or HCC
lets you take a deep dive into how humans and computers interact in our modern world. You might dive
deep into artificial intelligence and cognitive science, using AI agents to reflect on how humans think.
You might delve into learning sciences and technology, studying the intersection between computers
and education in depth. You might focus on social computing and how online communities function or
how social media is changing our society. Or you might stick closer to the core of HCI and focus on how
people make sense of new technologies. You might answer questions like, how do we give immediate
feedback on motion controls? Or how do we adapt a user interface to the user’s mood? There are
enormous questions to answer, and I’m excited to see some of you move on to change the world in very
profound ways.
No matter what you do next, I hope you’ve enjoyed this foray into the world of human-computer
interaction. If you end this course feeling like you actually know less than when you started, then that's
perfectly fine: my goal was not only to teach you about HCI, but also to help you understand how big
this community is. I look forward to seeing y’all go further in the community and make a difference in
the world. To close, I have to give a special thank you to Georgia Tech for creating the fantastic online
Master’s program in which I’m developing this course. And I’d also like to thank the HCI faculty at
Georgia Tech for letting me be the one to record this and bring it to you. And most of all, I’d like to
thank Amanda and Morgan, my partners in creating this course, for being totally responsible for how
amazing it’s looked. I like to think this isn’t just a course about human-computer interaction, but it’s
also an example of human-computer interaction: humans using computers to teach about HCI in new
and engaging ways. Thank you for watching.