Does Your Computer Understand You?: Making Computers That Treat You Right
Issue 15
Welcome to cs4fn
Delicious computing
Imagine being able to pick up an ordinary banana and use it as a phone.
That's part of the vision of 'invoked computing', which is being developed by Japanese researchers. A lot of the computers in our lives are camouflaged (smartphones are more like computers than phones, after all), but invoked computing would mean that computers were everywhere and nowhere at the same time. The idea is that in the future, computer systems could monitor an entire environment, watching your movements. Whenever you wanted to interact with a computer, you would just need to make a gesture. For example, if you picked up a banana and held one end to your ear and the other to your mouth, the computer would guess that you wanted to use the phone. It would then use a fancy speaker system to direct the sound, so you would even hear the phone call as though it were coming from the banana.

Sometimes you might find yourself needing a bit more computing power, though, right? Not to worry. You can make yourself a laptop if you just find an old pizza box. Lift the lid and the system will project the video and sound straight on to the box.

At the moment the banana phone and pizza box laptop are the only ways that you can use invoked computing in the researchers' system, but they hope to expand it so that you can use other objects. Then, rather than having to learn how to use your computers, your computers will have to learn how you would like to use them. And when you are finished using your phone, you could eat it.
www.cs4fn.org
to fit their hands, rather than change the buttons to suit their user preferences. It will be interesting to see whether F1 steering wheels become simpler as car designers discover human-computer interaction, or whether drivers will just have to get used to having as much to look at inside the car as outside.
always wins
Researchers in Japan have made a robot arm that always wins at rock, paper, scissors. Not with ultra-clever psychology, but with old-fashioned cheating. The robot uses high-speed motors and precise computer vision systems to recognise whether its human opponent is making the sign for rock, paper or scissors. One millisecond later, it can play the sign that beats whatever the human chooses. Because the whole process is so quick, it looks to humans like the robot is playing at the same time. See for yourself by going to the magazine+ section of www.cs4fn.org and watching the video of this amazing cheating robot.
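Once the vision system has recognised the human's sign, the robot's "cheat" is just a lookup: play whichever sign beats the one it saw. A minimal sketch of that decision step (the real system, of course, does this with high-speed cameras and motors in about a millisecond):

```python
# Which sign beats which: the whole of the robot's "strategy".
BEATS = {
    "rock": "paper",      # paper wraps rock
    "paper": "scissors",  # scissors cut paper
    "scissors": "rock",   # rock blunts scissors
}

def winning_move(opponent_sign: str) -> str:
    """Return the sign that beats the sign the vision system recognised."""
    return BEATS[opponent_sign]
```

So if the camera sees the human forming "rock", `winning_move("rock")` returns `"paper"`, and the arm plays it before anyone notices the delay.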
cs4fn
eecs.qmul.ac.uk
Proper pumping
Humans are imperfect. There are some things we always find difficult, and there's no way to change it. Interaction designers need to accept those things, and then design ways around them. For example, humans are not that great at doing many jobs at the same time. Sure, we can multitask if we really have to, but it often takes longer and we are prone to making more mistakes. One place that mistakes can be really dangerous is a hospital. Which means that it's probably better for doctors and nurses to do one thing at a time.
One thing at a time
How do you make that happen? This is where designers can almost be like clever tricksters. People will often choose the path of least resistance, so if you can find a way to make your desired method the easiest method, you're ahead of the game. That was the idea behind a study by Jonathan Back, Anna Cox and Duncan Brumby at University College London. They work on a project called CHI+MED, which is trying to find ways to design safer medical devices. They were working on the pumps that deliver medicine into patients' bodies through an intravenous drip. They believed that it was safer for nurses to program those pumps one at a time: they make fewer mistakes that way, and mistakes with medicine can be fatal. But could the researchers find a way to coax nurses into programming them in sequence rather than all together?

Design detectives
They found a clue in the way nurses get their dosage information. They read the dosage off a prescription form, then program it into the machine. Another clue came from the design of many pumps in hospitals. They are designed to stack one on top of another, to save space. What would happen is that nurses would bring their prescription form right next to the stack of pumps, and program the machines all together. Once they did a bit of programming on the first pump they would have to wait for it to do some work before the second step. So the nurses would go on to program a bit of the second pump, then come back to finish the first pump later. It seems more efficient, but it meant more mistakes. Sometimes nurses would get mixed up and forget steps, like opening the clamps that allow the medication to flow.

Improving nurses' form
The CHI+MED team wondered if the solution was to do what seems less efficient and move the prescription form away from the stack of pumps. That way, the nurses would have to remember the information they were reading from the forms.
Suddenly it was easier to program one pump at a time than to do them together. Their test subjects looked at the form, memorised the information, programmed the pump, released the clamp, and went back to the form to memorise the information for the next pump. Fewer mistakes! In the real world, you can't really force nurses to keep their forms on the other side of the room from the pumps. But the experiment gave designers an important clue to reducing mistakes. If they make it easier to program one pump at a time, patients will be safer. And it goes to show that good design is about knowing how people work, especially when we're not perfect.
Tasty behaviour
It's hard to be good, especially when it comes to eating right. Sadly what's tasty isn't always what's healthiest. Sometimes it can help to have a little nudge in the right direction. That's what computer scientists are trying to do in three new systems. By training you, enticing you or downright fooling you, each invention on these two pages uses computer interaction to help users eat better.
Virtually full
Here's a surprising trick that makes you feel like you've had your fill of food: make the food look bigger. Researchers in Japan have developed a virtual reality system that makes you see your food as though it's a larger portion. Once you've eaten it, you feel as though you've eaten more than you actually have. The researchers think they could use their system to help people eat less.

Humans use clues to guess how much food to eat. Psychologists have found that one of the clues we use is size. If the food portion looks large compared to the objects around it, we assume the portion is bigger. For example, experiments have found that the size of a bowl can affect how much soup it takes to make someone feel full. The researchers wondered if people would eat fewer cookies if they thought the cookies were bigger than they actually were.

To get the effect they used a virtual reality system. Volunteers wore goggles with a video camera on them. The camera was connected to little computer screens inside the goggles, showing a view of what was in front of them. The goggles were also connected to a computer that could control what the volunteer saw. Researchers gave the volunteers a plate of Oreo cookies and told them they could have as many as they liked. What the volunteers didn't know was that sometimes the virtual reality changed the size of the cookie so that it looked like they were holding a bigger (or sometimes a smaller) Oreo. The researchers then kept track of how many cookies each volunteer ate before feeling full.

The researchers found that their system worked. By changing the apparent size of the cookies, they could indeed affect how many cookies the volunteers had to eat before they felt full. And the virtual reality system did its trick flawlessly: no one suspected that the size of the food was being changed. One problem, though, was that in order to keep the illusion going, you can't change the size of the cookie very much. If you think about it, imagine if you tried to make the cookie really huge, like the size of a pizza. You would have to deform the cookie (and the hand holding it) so much that it would be obviously fake. So computer science will have to be happy with just making our cookie binges a little bit smaller.
Designer baby
How user-friendly is a baby? Paul Curzon, father to a newborn son and computer scientist at Queen Mary, University of London, investigates.
I recently acquired a new interactive gadget, better known as a newborn baby. How well-designed is he, I wondered. In particular, how usable is he for a novice like me? Time to do a usability analysis.

Heuristic evaluation is one of the simplest and most commonly used methods to evaluate the usability of a new system. It involves checking the system against a set of design principles: things like 'the system should always keep users informed of what is going on'. The usability expert judges the system under evaluation against each principle, making a list of potential problems, and suggests improvements to fix each. Here goes for our baby...

What's going on?
The first principle to check, 'visibility of system status', is about whether you can tell what is going on inside a gadget as you interact with it. It's all about 'feedback'. Every interactive gadget has a hidden internal state to keep track of what it is doing, and users need to be able to tell what that state is when it matters. The feedback should not only be understandable but also be timely. It's likely to be useless if it comes too late.

So how does the baby do for visibility? Well, there are five or six main internal states that seem to matter: happy, tired, hungry, in pain and needing his nappy changing. He certainly gives feedback about all of them. It's clear when he is happy. It's all smiles, gurgling and cooing. When he enters one of the other states you definitely know it. He screams his head off. The feedback is timely too! The moment he leaves the happy state he starts to scream and he won't stop unless you fix the problem, however long it takes. So far so good!

There is a problem though: for a novice at least, there seems to be only one form of feedback for all non-happy states. It seems to be impossible to tell the states apart, and so to know what the right thing is to do: feed him? Change his nappy? Rock him to sleep? Kiss him better? All you can do is guess. Our designer baby definitely needs something more. A screen on the forehead might be one option. It could flash a message that just says what the problem actually is. Simple.

Speak the lingo
The second design principle to check is about the match between the system and the real world. This is a convoluted way of saying it must be easy to understand. Does it use the same language and concepts as you, or do you have to learn what everything means first to use it? Does it follow standard conventions? Hmm! The baby doesn't do so well here. He uses screams. It certainly would help if there was a bit of English there from the start. Just a few words like 'hungry', 'poo', and so on, pre-installed when he came out of the box, would be a big improvement.

Control!
The third principle is about user control and freedom. The user of the system should feel they have control. They should feel things happen when they want them to, rather than the system just doing things behind their back without them being able to stop it. There should be simple controls to get the state to where you want it, too: like an undo button for when something goes wrong, and a home button that takes you back to the start rather than you having to navigate there.

What can I say? This doesn't seem to have been taken into account at all in the baby's design. In fact, since we had the baby, my whole life is out of control, never mind the baby! Focusing on the baby though, I seem to have little control of him at all. Things just happen and I have to react, with no easy way of gaining back control. Take poo! You try to gain control by changing the baby before it leaks and what happens? Five minutes later, when it's most inconvenient, he goes and poos. You have to change him all over again.

Some of the help books suggest you can gain control by fixing feeding times and sleep times, ignoring the screams of anger that result. After a while (and we are talking months of frazzled nerves rather than hours here) the baby will fall into the routine and all will be fine. Why on earth didn't the designer just build that behaviour in as standard? There definitely should be a home button. It would just put the baby back in the happy state he was in when he first woke up, allowing you to start the day again. A 'poo now' button would also be good. In fact that is so obvious I can't imagine why the designer never thought of it. He or she must never have had a baby themselves.
Same again?
The next principle is about consistency and standards. The system shouldn't do different things at different times that mean the same thing. At first sight this seems to be ok. The baby just screams whenever he wants something. That's pretty consistent! However, with more detailed observation it's clear there is more going on. When he is tired, sometimes he screams. At other times he starts to angrily throw his toys about. At calmer times he starts to twist his hair with his fingers, or just reaches up for a cuddle, all meaning 'I need to sleep'.
We want consistency for our designer baby. The designer should pick one way to communicate the message and stick to it. My preference would be for the reaching-up-for-a-cuddle signal. We are focusing on usability here rather than user experience. However, user experience is important too. If the baby always used the 'want cuddle' signal rather than the scream signal whenever he wanted to communicate 'I'm tired', it would certainly improve my experience! User experience is good for business if you want repeat sales. If the designer had focused a bit more on experience it would have increased the likelihood I would get another.
To read more on the usability of Paul's baby, look in the magazine+ section of our website, www.cs4fn.org.
Using clever computer vision techniques, it's now possible for your ingredients to tell you how they should be cooked in a kitchen. The system uses cameras and projectors to first recognise the ingredients on the chopping board: for example, the size, shape and species of fish you are using. Then the system projects a cutting line on the fish to show you how to prepare it, and a speech bubble telling you how long it should be cooked for and suggesting ways it can be served. In the future these cooking support systems could take some of the strain from mealtimes. At least it will help to make us all better cooks, and perhaps with an added pinch of artificial intelligence we can all become more like Jamie Oliver.
Ideas in a flash
Not long ago, some computer scientists stood up and spoke out. They said that a particular bit of gadget design had become unimaginative. They were bored with the way things were and they weren't going to take it anymore. They gave themselves a mission: to reignite people's imagination about this particular bit of design. What were they talking about? A feature that almost every electronic device shares, which we take completely for granted and accept is small and kind of dull. The researchers, from Carnegie Mellon and Michigan State universities in the US, want to start a revolution in those little lights that tell us whether something is on or off.
Not saying very much
Those lights (called LEDs, for light emitting diodes) started appearing on gadgets in the 1970s, when they first became cheap enough for manufacturers to stick on almost everything they made. Fast forward to the 21st century, and they're everywhere. LEDs are on computers, televisions, smoke alarms, speakers, even toothbrushes. But what do they really tell us? Not a whole lot, when you think about it. They can be on, which usually means that the appliance is on, or off, which means off. But then, sometimes a light that's on means the gadget is standing by. Your TV probably has an LED that's on when the television itself is off. That's one problem with the little lights: they can mean different things on different gadgets. Another problem is that it's not always clear what the lights mean anyway. The researchers found a toaster that had a blinking LED on it. What is that supposed to mean? That the toast is done? Or that it's still cooking? Or that the toaster is jammed? Who knows.

On, off and in between
To start to get the creative wheels turning, the researchers brought in a team of designers for a brainstorming session. How many different things could they think of for a single, solid-colour light to do? A lot, it turns out. They came up with 24 ways for a light to be on or off, ranging from a heartbeat-like pulse to a gentle flickering like a candle. The next step was to try and connect those behaviours to something that your gadget might like to tell you. Things like 'incoming call', 'booting up', or 'unable to connect'.

Next came an experiment. The computer scientists randomly partnered up combinations of the flickery light behaviours and the possible messages. Then they got hundreds of testers to look at the pairings and rate how appropriate they were. Would a heartbeat go better with 'you've received a message' or 'I'm thinking'? It was up to the testers to say.

Which light is right?
The results showed that certain types of behaviour connected better with certain messages. For the messages that were notifications, like 'incoming call' or 'scheduled event', bright flashes seemed to make sense. On the other hand, slow pulses were better to get a low-energy message across. For 'turning on', a light that got brighter in steps, like a staircase, was best. You can sort of see the comparisons that are at work in each of these messages: the slowly building light is kind of like the machine activating all the parts within itself.

One problem the experiment found was that there wasn't a clear way to get across messages about being unable to do something. There was no agreement among the testers about whether any of the light behaviours looked right. Maybe they just didn't find the right comparison this time. There will be a next time. After all, the researchers want to start a revolution. They want our little lights to say more, and say it with more imagination. Others will try again, and build a more expressive language of tiny little lights. And just wait until they start changing colours.
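A light behaviour is really just a pattern of brightness over time, so the kinds of patterns described above can be sketched as simple functions from time to brightness. These particular shapes and numbers are invented for illustration; they are not the study's actual 24 behaviours:

```python
import math

def heartbeat(t: float) -> float:
    """Two quick pulses per one-second cycle, a bit like a heartbeat.

    Brightness is between 0.0 (off) and 1.0 (full). Timing values invented.
    """
    phase = t % 1.0
    if phase < 0.15 or 0.25 <= phase < 0.40:
        return 1.0
    return 0.1  # dim between beats

def staircase(t: float, steps: int = 4) -> float:
    """Brightness rising in steps over one second, like the 'turning on' pattern."""
    phase = min(t, 1.0)
    return math.ceil(phase * steps) / steps

def slow_pulse(t: float) -> float:
    """A gentle four-second sine pulse, like a low-energy message."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * t / 4.0)
```

Sampling any of these functions at regular intervals and sending the values to an LED driver would reproduce the pattern on real hardware.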
Where'd you go?
The next step for the system was to see if it could recognise which room someone was standing in when they performed the movements. There were now eight locations to keep straight: two locations in one large room and six more scattered throughout the house. It was up to the system to learn the electrical signature for each room, as well as the signature for each movement. That's pretty tough work. But it worked well. Really well. The system was able to guess the room almost 100% of the time. What's more, they found that the location tracking even worked on the data from the first experiment, when they were only supposed to be looking at movements. But the electrical signatures of each room were built into that data too, and the system was expert enough to pick them out.
Putting it all together
In the future the researchers are hoping that their gadgets will become small enough to carry around with you wherever you are in a building. This could allow you to control computers within your house, or switch things on and off just by making certain movements. The fact that the system can sense your location might mean that you could use the same gestures to do different things. Maybe in the living room a punch would turn on the television, but in the kitchen it would start the microwave. Whatever the case, it's a great way to use the invisible flow of energy all around us.
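One common way a system can "learn the electrical signature for each room" is to store an average signature per room during training, then match a new reading to whichever stored signature is closest. A minimal nearest-match sketch; the room names, the three-number signatures, and the readings are all invented for illustration (the real system's features and learning method are not described here):

```python
import math

# Hypothetical training result: each room's averaged electrical "signature",
# e.g. noise levels picked up in a few frequency bands. Values invented.
ROOM_SIGNATURES = {
    "kitchen":     [0.9, 0.2, 0.4],
    "living room": [0.3, 0.8, 0.1],
    "bedroom":     [0.1, 0.3, 0.7],
}

def distance(a, b):
    """Straight-line (Euclidean) distance between two signatures."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def guess_room(reading):
    """Return the room whose stored signature is closest to the new reading."""
    return min(ROOM_SIGNATURES, key=lambda room: distance(reading, ROOM_SIGNATURES[room]))
```

A noisy reading like `[0.85, 0.25, 0.35]` sits much closer to the stored kitchen signature than to the others, so `guess_room` picks the kitchen.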
The electrical sensors would really have to work hard to make sense through the interference.
Contagious emotion
When you express yourself on Facebook, do you ever think that it can change what your friends write? You might not be writing for anyone in particular. You might even feel like you're just shouting into the wind. But the science of interaction shows the things you write have a small effect. Posts you write about your own emotions could end up being reflected in the emotions others write about. Research has found that your own Facebook status updates could affect your friends' posts, even days later.
The power of words?
Psychologists have studied how people transmit emotions to one another, almost like viruses. It's useful to be able to share emotions; it helps us bond with someone else. But most research so far has been on people sharing emotions in person. Some psychologists even think that most emotions are shared through physical cues rather than words. But now that we communicate more in text, research is beginning to show it's possible to affect someone else's emotions even if you're not with them. Adam Kramer, a researcher at Facebook, wanted to see whether emotional status updates led to friends posting about similar emotions.

No need to be nice
There is an interesting advantage to studying Facebook status updates. They are written for a large group rather than to any particular person, which means any shared feelings are more likely to be true. If someone sent you an emotional text, it would be rude not to respond, and sometimes you might pretend to share a certain emotion just to be nice. No one in particular needs to reply to a status update, so if you don't share the emotion you don't have to say anything similar. If Facebook statuses seemed to mimic emotion, it's a clue that it really is possible for your posts to influence the emotions that others post about.

Collecting feelings
Adam started by selecting about 60,000 people on Facebook: a group of users and all their friends who had posted something three days in a row. Then he gathered all the status updates they wrote over those three days. He analysed all the words they used with the help of a special dictionary for emotion research, which rates words according to whether they have positive or negative meanings. His prediction was that he would see users and their friends using words associated with the same sort of emotion. He did. Not only were users likely to post about similar emotions on the same day, the effect seemed to stick around for at least a couple of days afterward.
Imagine you post about being happy. What Adam found was that if he compared your friends' posts tomorrow to your posts today, they would tend to be happy posts too. Your friends' posts would even show similar emotions to your original posts after two days. He even found that if one person posted about a certain emotion, it could stop others from posting about an opposite emotion. For example, when people expressed positive emotions in their status updates, their friends held back from posting about negative emotions.

You're influential
So it turns out that the words you use on Facebook influence your friends' choice of words too. Adam points out that you may not be affecting your friends themselves. Others will have to do more research about whether your Facebook posts can directly give other people the same emotions as you. But just by posting on Facebook, you change what your friends post about. That's not to say that you have to follow the crowd. In fact, it's probably better that you say what you're thinking, without worrying whether others are saying it too. Your friends can feel good for you when things are going well, and they can help if things are troubling you. But the next time you write a post, it's interesting to think that you might be helping in a small way to bring thousands of other posts into existence too.
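The word-counting idea behind the study can be sketched in a few lines: give each post a score by counting its positive words and subtracting its negative ones. The tiny word lists below are stand-ins for illustration only; the real study used a research dictionary rating thousands of words:

```python
# Stand-in word lists; the real emotion-research dictionary rates
# thousands of words as positive or negative.
POSITIVE = {"happy", "great", "love", "fun", "good"}
NEGATIVE = {"sad", "angry", "awful", "hate", "bad"}

def emotion_score(post: str) -> int:
    """Positive word count minus negative word count for one status update.

    A simple whitespace split is used here, so punctuation stuck to a word
    would stop it matching; a real analysis would clean the text first.
    """
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

Scoring every post this way, day by day, lets you ask the study's question with plain arithmetic: do high-scoring posts from one person tend to be followed by high-scoring posts from their friends?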