MIT Media Lab Projects
October 2012
communications@media.mit.edu
http://www.media.mit.edu
Many of the MIT Media Lab research projects described in the following pages are conducted under the auspices of
sponsor-supported, interdisciplinary Media Lab centers, consortia, joint research programs, and initiatives. They are:
CE 2.0
Most of us are awash in consumer electronics (CE) devices: from cell phones, to TVs, to dishwashers. They provide us with
information, entertainment, and communications, and assist us in accomplishing our daily tasks. Unfortunately, most are not
as helpful as they could and should be; for the most part, they are dumb, unaware of us or our situations, and often difficult
to use. In addition, most CE devices cannot communicate with our other devices, even when such communication and
collaboration would be of great help. The Consumer Electronics 2.0 initiative (CE 2.0) is a collaboration between the Media
Lab and its sponsor companies to formulate the principles for a new generation of consumer electronics that are highly
connected, seamlessly interoperable, situation-aware, and radically simpler to use. Our goal is to show that as computing
and communication capability seep into more of our everyday devices, these devices do not have to become more confusing
and complex, but rather can become more intelligent in a cooperative and user-friendly way.
The most current information about our research is available on the MIT Media Lab Web site, at
http://www.media.mit.edu/research/.
Digital Life
Digital Life consortium activities engage virtually the entire faculty of the Media Lab around the theme of "open innovation."
Researchers divide the topic into three areas: open communications, open knowledge, and open everything. The first
explores the design and scalability of agile, grassroots communications systems that incorporate a growing understanding of
emergent social behaviors in a digital world; the second considers a cognitive architecture that can support many features of
"human intelligent thinking" and its expressive and economic use; and the third extends the idea of inclusive design to
immersive, affective, and biological interfaces and actions.
16. Direct Engineering and Testing of Novel Therapeutic Platforms for Treatment of Brain Disorders
17. Exploratory Technologies for Understanding Neural Circuits
18. Hardware and Systems for Control of Neural Circuits with Light
19. Molecular Reagents Enabling Control of Neurons and Biological Functions with Light
20. Recording and Data-Analysis Technologies for Observing and Analyzing Neural Circuit Dynamics
21. Understanding Neural Circuit Computations and Finding New Therapeutic Targets
377. BlitzScribe: Speech Analysis for the Human Speechome Project
378. Crowdsourcing the Creation of Smart Role-Playing Agents
379. HouseFly: Immersive Video Browsing and Data Visualization
380. Human Speechome Project
381. Speech Interaction Analysis for the Human Speechome Project
382. Speechome Recorder for the Study of Child Development Disorders
3. Consumer Holo-Video   V. Michael Bove Jr., James D. Barabas, Sundeep Jolly and Daniel E. Smalley
The goal of this project, building upon work begun by Stephen Benton and the
Spatial Imaging group, is to create an inexpensive desktop monitor for a PC or
game console that displays holographic video images in real time, suitable for
entertainment, engineering, or medical imaging. To date, we have demonstrated the
fast rendering of holo-video images (including stereographic images that, unlike
ordinary stereograms, have focusing consistent with depth information) from
OpenGL databases on off-the-shelf PC graphics cards; current research addresses
new optoelectronic architectures to reduce the size and manufacturing cost of the
display system.
4. Direct Fringe Writing of Computer-Generated Holograms   V. Michael Bove Jr., Sundeep Jolly and University of Arizona College of Optical Sciences
NEW LISTING
Photorefractive polymer has many attractive properties for dynamic holographic displays; however, the current display systems based around its use involve generating holograms by optical interference methods that complicate the optical and computational architectures of the systems and limit the kinds of holograms that can be displayed. We are developing a system to write computer-generated diffraction fringes directly from spatial light modulators to photorefractive polymers, resulting in displays with reduced footprint and cost, and potentially higher perceptual quality.
5. Everything Tells a Story   V. Michael Bove Jr., David Cranor and Edwina Portocarrero
Following upon work begun in the Graspables project, we are exploring what
happens when a wide range of everyday consumer products can sense, interpret
into human terms (using pattern recognition methods), and retain memories, such
that users can construct a narrative with the aid of the recollections of the "diaries"
of their sporting equipment, luggage, furniture, toys, and other items with which they
interact.
8. Narratarium V. Michael Bove Jr., Catherine Havasi, Katherine (Kasia) Hayden, Daniel Novy,
Jie Qi and Robert H. Speer
NEW LISTING
Remember telling scary stories in the dark with flashlights? Narratarium is an
immersive storytelling environment to augment creative play using texture, color,
and image. We are using natural language processing to listen to and understand
stories being told, and thematically augment the environment using color and
images. As a child tells stories about a jungle, the room is filled with greens and
browns and foliage comes into view. A traveling parent can tell a story to a child and
fill the room with images, color, and presence.
12. Slam Force Net V. Michael Bove Jr., Santiago Alfaro and Daniel Novy
Adding augmented reality to the living room TV, we are exploring the technical and
creative implications of using a mobile phone or tablet (and possibly also dedicated
devices like toys) as a controllable "second screen" for enhancing television
viewing. Thus, a viewer could use the phone to look beyond the edges of the
television to see the audience for a studio-based program, to pan around a sporting
event, to take snapshots for a scavenger hunt, or to simulate binoculars to zoom in
on a part of the scene. Recent developments include the creation of a mobile device
app for Apple products and user studies involving several genres of broadcast
television programming.
14. The "Bar of Soap": Grasp-Based Interfaces   V. Michael Bove Jr. and Brandon Taylor
We have built several handheld devices that combine grasp and orientation sensing with pattern recognition in order to provide highly intelligent user interfaces. The Bar
of Soap is a handheld device that senses the pattern of touch and orientation when
it is held, and reconfigures to become one of a variety of devices, such as phone,
camera, remote control, PDA, or game machine. Pattern-recognition techniques
allow the device to infer the user's intention based on grasp. Another example is a
baseball that determines a user's pitching style as an input to a video game.
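As a rough sketch of the kind of grasp inference described above (the sensor layout, templates, and classifier here are all invented for illustration, not the Bar of Soap's actual implementation), a nearest-template classifier over touch-sensor readings might look like:

```python
# Hypothetical sketch: infer the intended device mode from a grasp by
# comparing the current touch-sensor reading to stored per-mode templates.
import math

# Template sensor vectors for each mode (e.g., six capacitive touch regions).
templates = {
    "phone":  [1, 1, 0, 0, 1, 0],   # held vertically against the ear
    "camera": [0, 1, 1, 1, 0, 0],   # two-handed horizontal grip
    "remote": [1, 0, 0, 0, 0, 1],   # one-handed, pointing away
}

def classify_grasp(reading):
    """Return the mode whose template is closest (Euclidean) to the reading."""
    return min(templates, key=lambda m: math.dist(reading, templates[m]))

# A reading close to the two-handed grip is classified as "camera".
mode = classify_grasp([0, 1, 1, 1, 0, 1])
```

A real device would train such templates from labeled grasp data rather than hand-code them, but the inference step is the same: pick the grasp class the current reading most resembles.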
16. Direct Engineering and Testing of Novel Therapeutic Platforms for Treatment of Brain Disorders   Gilberto Abram, Leah Acker, Zack Anderson, Nir Grossman, Xue Han, Mike Henninger, Margaret Kim, Ekavali Mishra, Fumi Yoshida
NEW LISTING
New technologies for controlling neural circuit dynamics, or entering information into the nervous system, may be capable of serving in therapeutic roles for improving the health of human patients–enabling the restoration of lost senses, the control of aberrant or pathological neural dynamics, and the augmentation of neural circuit computation, through prosthetic means. We are assessing the translational possibilities opened up by our technologies, exploring the safety and efficacy of optogenetic neuromodulation in multiple animal models, and also pursuing, both in our group and in collaborations with others, proofs-of-principle of new kinds of optical neural control prosthetics. By combining observation of brain activity with real-time analysis and responsive optical neurostimulation, new kinds of "brain co-processor" may be possible which can work efficaciously with the brain to augment its computational abilities, e.g., in the context of cognitive, emotional, sensory, or motor disability.
17. Exploratory Technologies for Understanding Neural Circuits   Brian Allen, Rachel Bandler, Steve Bates, Fei Chen, Jonathan Gootenberg, Suhasa Kodandaramaiah, Daniel Martin-Alarcon, Paul Tillberg, Aimei Yang
NEW LISTING
We are continually exploring new strategies for understanding neural circuits, often in collaboration with other scientific, engineering, and biology research groups. If you would like to collaborate on such a project, please contact us.
18. Hardware and Systems for Control of Neural Circuits with Light   Claire Ahn, Brian Allen, Michael Baratta, Jake Bernstein, Stephanie Chan, Brian Chow, August Dietrich, Nir Grossman, Alexander Guerra, Mike Henninger, Emily Ko, Alex Rodriguez, Jorg Scholvin, Giovanni Talei Franzesi, Ash Turza, Christian Wentz, Anthony Zo
The brain is a densely wired, heterogeneous circuit made out of thousands of
different kinds of cell. Over the last several years we have developed a set of
"optogenetic" reagents, fully genetically encoded reagents that, when targeted to
specific cells, enable their physiology to be controlled via light. To confront the 3D
complexity of the living brain, enabling the analysis of the circuits that causally drive
or support specific neural computations and behaviors, our lab and our
collaborators have developed hardware for delivery of light into the brain, enabling
control of complexly shaped neural circuits, as well as the ability to combinatorially
activate and silence neural activity in distributed neural circuits. We anticipate that
these tools will enable the systematic analysis of the brain circuits that
mechanistically and causally contribute to specific behaviors and pathologies.
19. Molecular Reagents Enabling Control of Neurons and Biological Functions with Light   Fei Chen, Yongku Cho, Brian Chow, Amy Chuong, Allison Dobry, Xue Han, Nathan Klapoetke, Albert Kwon, Mingjie Li, Daniel Martin-Alarcon, Tania Morimoto, Xiaofeng Qian, Daniel Schmidt, Aimei Yang
Over the last several years our lab and our collaborators have pioneered a new area–the development of a number of fully genetically encoded reagents that, when targeted to specific cells, enable their physiology to be controlled via light. These reagents, known as optogenetic tools, enable temporally precise control of neural electrical activity, cellular signaling, and other high-speed natural as well as synthetic biology processes and pathways using light. Such tools are now in widespread use in neuroscience, for the study of the neuron types and activity
20. Recording and Data-Analysis Technologies for Observing and Analyzing Neural Circuit Dynamics   Brian Allen, Scott Arfin, Jake Bernstein, Brian Chow, Mike Henninger, Justin Kinney, Suhasa Kodandaramaiah, Caroline Moore-Kochlacs, Nikita Pak, Jorg Scholvin, Annabelle Singer, Al Strelzoff, Giovanni Talei Franzesi, Ash Turza, Christian Wentz, Ian Wicker
NEW LISTING
The brain is a 3D, densely wired circuit that computes via large sets of widely distributed neurons interacting at fast timescales. To understand the brain, ideally it would be possible to observe the activity of many neurons with as great a degree of precision as possible, so as to understand the neural codes and dynamics that are
produced by the circuits of the brain. With collaborators, our lab is developing
innovations to enable such analyses of neural circuit dynamics. Such neural
observation strategies may also serve as detailed biomarkers of brain disorders, or
indicators of potential drug side effects. We have also developed robotic methods
for automated intracellular recording of neurons in the living brain, which uniquely
enables the characterizing of synaptic and ion channel influences on neural
computation with single-cell resolution. Such technologies may, in conjunction with
optogenetics, enable closed-loop neural control technologies, which can introduce
information into the brain as a function of brain state ("brain co-processors"),
enabling new kinds of circuit characterization tools as well as new kinds of
advanced brain-repair prosthetics.
21. Understanding Neural Circuit Computations and Finding New Therapeutic Targets   Carissa Jansen, Leah Acker, Brian Allen, Michael Baratta, Steve Bates, Sean Batir, Jake Bernstein, Tim Buschman, Huayu Ding, Stephen Eltinge, Xue Han, Kyungman Kim, Suhasa Kodandaramaiah, Pei-Ann Lin, Carolina Lopez-Trevino, Patrick Monahan, Caroline Moor
NEW LISTING
We are using our tools–such as optogenetic neural control and brain circuit dynamics measurement–both within our lab and in collaborations with others, to
analyze how specific sets of circuit elements within neural circuits give rise to
behaviors and functions such as cognition, emotion, movement, and sensation. We
are also determining which neural circuit elements can initiate or sustain
pathological brain states. Principles of controlling brain circuits may yield
fundamental insights into how best to go about treating brain disorders. Finally, we
are screening for neural circuit targets that, when altered, present potential
therapeutic benefits, and which may serve as potential drug targets or electrical
stimulation targets. In this way we hope to explore systematic, causal, temporally
precise analyses of how neural circuits function, yielding both fundamental scientific
insights and important clinically relevant principles.
23. Cloud-HRI Cynthia Breazeal, Nicholas DePalma, Adam Setapen and Sonia Chernova
NEW LISTING
Imagine opening your eyes and being awake for only a half an hour at a time. This is the life that robots traditionally live. This is due to a number of factors, such as battery life and wear on prototype joints. Roboticists have typically muddled through
this challenge by crafting handmade models of the world or performing machine
learning with synthetic data–and sometimes real-world data. While robotics
researchers have traditionally used large distributed systems to do perception,
planning, and learning, cloud-based robotics aims to link all of a robot's
experiences. This movement aims to build large-scale machine learning algorithms
that use experience from large groups of people, whether sourced from a large
number of tabletop robots or a large number of experiences with virtual agents.
Large-scale robotics aims to change embodied AI as it changed non-embodied AI.
24. DragonBot: Android phone robots for long-term HRI   Adam Setapen, Natalie Freed, and Cynthia Breazeal
NEW LISTING
DragonBot is a new platform built to support long-term interactions between children and robots. The robot runs entirely on an Android cell phone, which displays an animated virtual face. Additionally, the phone provides sensory input (camera and microphone) and fully controls the actuation of the robot (motors and speakers).
Most importantly, the phone always has an Internet connection, so a robot can
harness cloud-computing paradigms to learn from the collective interactions of
multiple robots. To support long-term interactions, DragonBot is a "blended-reality"
character–if you remove the phone from the robot, a virtual avatar appears on the
screen and the user can still interact with the virtual character on the go. Costing
less than $1,000, DragonBot was specifically designed to be a low-cost platform
that can support longitudinal human-robot interactions "in the wild."
25. Huggable: A Robotic Companion for Long-Term Health Care, Education, and Communication   Cynthia Breazeal, Walter Dan Stiehl, Robert Toscano, Jun Ki Lee, Heather Knight, Sigurdur Orn Adalgeirsson, Jeff Lieberman and Jesse Gray
The Huggable is a new type of robotic companion for health care, education, and social communication applications. The Huggable is much more than a fun, interactive robotic companion; it functions as an essential team member of a triadic
interaction. Therefore, the Huggable is not meant to replace any particular person in
a social network, but rather to enhance it. The Huggable is being designed with a
full-body sensitive skin with over 1500 sensors, quiet back-drivable actuators, video
cameras in the eyes, microphones in the ears, an inertial measurement unit, a
speaker, and an embedded PC with 802.11g wireless networking. An important
design goal for the Huggable is to make the technology invisible to the user. You
should not think of the Huggable as a robot but rather as a richly interactive teddy
bear.
27. MDS: Exploring the Dynamics of Human-Robot Collaboration   Cynthia Breazeal, Sigurdur Orn Adalgeirsson, Nicholas Brian DePalma, Jin Joo Lee, Philipp Robbel; Alborz Geramifard, Jon How, Julie Shah (CSAIL); Malte Jung and Pamela Hinds (Stanford)
NEW LISTING
As robots become more and more capable, we will begin to invite them into our daily lives. There have been few examples of mobile robots able to carry out everyday tasks alongside humans. Though research on this topic is becoming more
and more prevalent, we are just now beginning to understand what it means to
collaborate. This project aims to unravel the dynamics involved in taking on
leadership roles in collaborative tasks as well as balancing and maintaining the
expectations of each member of the group (whether it be robot or human). This
challenge involves aspects of inferring internal human state, role support and
planning, as well as optimizing and keeping synchrony amongst team members
"tight" in their collaboration.
29. Socially Assistive Robotics: An NSF Expedition in Computing   Tufts University, University of Southern California, Cynthia Breazeal, Jacqueline Marie Kory, Jin Joo Lee, David Robert, Edith Ackermann, Catherine Havasi, Kasia Hayden with Stanford University, Sooyeon Jeong, Willow Garage and Yale University
NEW LISTING
Our mission is to develop the computational techniques that will enable the design, implementation, and evaluation of "relational" robots, to encourage the social,
emotional, and cognitive growth in children, including those with social or cognitive
deficits. Funding for the project comes from the NSF Expeditions in Computing
program. This Expedition has the potential to substantially impact the effectiveness
of education and healthcare, and to enhance the lives of children and other groups
that require specialized support and intervention. In particular, the MIT effort is
focusing on developing second language learning companions for pre-school aged
children, ultimately for ESL (English as a Second Language).
31. TinkRBook: Reinventing the Reading Primer   Cynthia Breazeal, Angela Chang and David Scott Nunez
TinkRBook is a storytelling system that introduces a new concept of reading, called
textual tinkerability. Textual tinkerability uses storytelling gestures to expose the
text-concept relationships within a scene. Tinkerability prompts readers to become
more physically active and expressive as they explore concepts in reading together.
TinkRBooks are interactive storybooks that prompt interactivity in a subtle way,
enhancing communication between parents and children during shared picture-book
reading. TinkRBooks encourage positive reading behaviors in emergent literacy:
parents act out the story to control the words on-screen, demonstrating print
referencing and dialogic questioning techniques. Young children actively explore the
abstract relationship between printed words and their meanings, even before this
relationship is properly understood. By making story elements alterable within a
narrative, readers can learn to read by playing with how word choices impact the
storytelling experience. Recently, this research has been applied to developing
countries.
NEW LISTING
Codeable Objects is a library for Processing that allows people to design and build objects using geometry and programming. Geometric computation offers a host of powerful design techniques, but its use is limited to individuals with a significant amount of programming experience or access to complex design software. In contrast, Codeable Objects allows a range of people, including novice coders, designers, and artists, to rapidly design, customize, and construct an artifact using geometric computation and digital fabrication. The programming methods provided
by the library allow the user to program a wide range of structures and designs with
simple code and geometry. When the user compiles their code, the software
outputs tool paths based on their specifications, which can be used in conjunction
with digital fabrication tools to build their object.
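A minimal illustration of that code-to-toolpath pipeline, using a hypothetical helper rather than the actual Codeable Objects API (which is a Processing library): a few lines of code describe a shape parametrically, and the output is a closed path for a fabrication tool to follow.

```python
# Hypothetical sketch: parametric geometry in, a closed cut path out.
import math

def polygon_path(sides, radius, cx=0.0, cy=0.0):
    """Return the closed list of (x, y) points a cutter should follow."""
    pts = [
        (cx + radius * math.cos(2 * math.pi * i / sides),
         cy + radius * math.sin(2 * math.pi * i / sides))
        for i in range(sides)
    ]
    return pts + [pts[0]]  # repeat the first point to close the loop

# A hexagonal panel, 40 mm across, as a cut path for a laser cutter:
path = polygon_path(sides=6, radius=20)
```

A real system would then translate such point lists into the machine's native format (e.g., vector cut files or G-code), which is the "outputs tool paths" step described above.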
37. Exploring Artisanal Leah Buechley, Sam Jacoby and David A. Mellis
Technology
NEW LISTING
We are exploring the methods by which traditional artisans construct new electronic technologies using contextually novel materials and processes, incorporating wood, textiles, reclaimed and recycled products, as well as conventional circuitry. Such
artisanal technologies often address different needs, and are radically different in
form and function than conventionally designed and produced products.
The LilyPad Arduino is a set of tools that empowers people to build soft, flexible,
fabric-based computers. A set of sewable electronic modules enables users to
blend textile craft, electrical engineering, and programming in surprising, beautiful,
and novel ways. A series of workshops that employed the LilyPad have
demonstrated that tools such as these, which introduce engineering from new
perspectives, are capable of involving unusual and diverse groups in technology
development. Ongoing research will explore how the LilyPad and similar devices
can engage under-represented groups in engineering, change popular assumptions
about the look and feel of technology, and spark hybrid communities that combine
rich crafting traditions with high-tech materials and processes.
40. Microcontrollers as Material   Leah Buechley, Sam Jacoby, David A. Mellis, Hannah Perner-Wilson and Jie Qi
NEW LISTING
We've developed a set of tools and techniques that make it easy to use microcontrollers as an art or craft material, embedding them directly into drawings
or other artifacts. We use the ATtiny45 from Atmel, a small and cheap (~$1)
microcontroller that can be glued directly to paper or other objects. We then
construct circuits using conductive silver ink, dispensed from squeeze bottles with
needle tips. This makes it possible to draw a circuit, adding lights, speakers, and
other electronic components.
44. CharmMe Catherine Havasi, Brett Samuel Lazarus and Victor J Wang
NEW LISTING
CharmMe is a mobile social discovery application that helps people meet each other during events. The application blends physical and digital proximity to help you connect with other like-minded individuals. Armed with RFID sensors and a
model of how the Lab works, CharmMe determines who you should talk to using
information including checking in to conference talks or “liking” projects using QR
codes. In addition, possible opening topics of conversation are suggested based on
users' expressed similar interests.
45. ConceptNet Catherine Havasi, Robert Speer, Henry Lieberman and Marvin Minsky
Alumni Contributors: Jason Alonso, Kenneth C. Arnold, Ian Eslick, Xinyu H. Liu and
Push Singh
NEW LISTING
How can a knowledge base learn from the Internet, when you shouldn't trust everything you read on the Internet? CORONA is a system for building a knowledge base from a combination of reliable and unreliable sources, including crowd-sourced contributions, expert knowledge, Games with a Purpose, automatic machine reading, and even knowledge that is imperfectly derived from other knowledge in the system. It rates knowledge as more reliable as more sources confirm it, and as unreliable when sources disagree; by running the system in reverse, it can discover which knowledge sources are the most trustworthy.
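The mutual-reinforcement idea described above can be sketched as a small iteration in which assertion reliability and source trust are each estimated in terms of the other (the data, update rule, and numbers here are invented for illustration and are not CORONA's actual algorithm):

```python
# Toy sketch: assertion reliability and source trust estimated together.
votes = {  # (source, assertion) -> +1 confirms, -1 disputes
    ("crowd", "sky is blue"): 1,
    ("expert", "sky is blue"): 1,
    ("spammer", "sky is blue"): -1,
    ("spammer", "sky is green"): 1,
    ("crowd", "sky is green"): -1,
}
sources = {s for s, _ in votes}
assertions = {a for _, a in votes}
trust = {s: 1.0 for s in sources}  # start by trusting everyone equally

for _ in range(20):
    # An assertion is reliable to the extent that trusted sources confirm it.
    rel = {}
    for a in assertions:
        weighted = [(trust[s], v) for (s, aa), v in votes.items() if aa == a]
        rel[a] = sum(t * v for t, v in weighted) / sum(t for t, _ in weighted)
    # "Running the system in reverse": a source is trustworthy to the extent
    # that its votes agree with the current reliability estimates.
    for s in sources:
        agree = [rel[a] * v for (ss, a), v in votes.items() if ss == s]
        trust[s] = max(sum(agree) / len(agree), 0.01)  # keep weights positive
```

After a few iterations the source that disagrees with the others ends up with low trust, and the assertion only it supported ends up rated unreliable.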
47. Divisi: Reasoning Over Semantic Relationships   Robert Speer, Catherine Havasi, Kenneth Arnold, and Jason Alonso
We have developed technology that enables easy analysis of semantic data,
blended in various ways with common-sense world knowledge. The results support
reasoning by analogy and association. A packaged library of code is being made
available to all sponsors.
48. Narratarium V. Michael Bove Jr., Catherine Havasi, Katherine (Kasia) Hayden, Daniel Novy,
Jie Qi and Robert H. Speer
NEW LISTING
Remember telling scary stories in the dark with flashlights? Narratarium is an immersive storytelling environment to augment creative play using texture, color, and image. We are using natural language processing to listen to and understand stories being told, and thematically augment the environment using color and images. As a child tells stories about a jungle, the room is filled with greens and browns and foliage comes into view. A traveling parent can tell a story to a child and fill the room with images, color, and presence.
49. Open Mind Common Sense   Michael Luis Puncel, Karen Anne Sittig and Robert H. Speer
The biggest problem facing artificial intelligence today is how to teach computers
enough about the everyday world so that they can reason about it like we do—so
that they can develop "common sense." We think this problem may be solved by
harnessing the knowledge of people on the Internet, and we have built a Web site to
make it easy and fun for people to work together to give computers the millions of
little pieces of ordinary knowledge that constitute "common sense." Teaching
computers how to describe and reason about the world will give us exactly the
technology we need to take the Internet to the next level, from a giant repository of
Web pages to a new state where it can think about all the knowledge it contains; in
essence, to make it a living entity.
50. Red Fish, Blue Fish Robert Speer and Catherine Havasi
With commonsense computing, we can discover trends in the topics that people are
talking about right now. Red Fish Blue Fish takes input in real time from lots of
political blogs, and creates a visualization of what topics are being discussed by the
left and the right.
NEW LISTING
The Analogy Space project, which is built upon ConceptNet, has the ability to identify similar concepts by building vectors out of them in a multi-dimensional space. Story Space will apply this technique to human narrative in order to provide
a measure of similarity between different stories.
53. The Glass Infrastructure   Henry Holtzman, Andy Lippman, Matthew Blackshaw, Jon Ferguson, Catherine Havasi, Julia Ma, Daniel Schultz and Polychronis Ypodimatopoulos
This project builds a social, place-based information window into the Media Lab
using 30 touch-sensitive screens strategically placed throughout the physical
complex and at sponsor sites. The idea is to get people to talk among themselves
about the work that they jointly explore in a public place. We present Lab projects
as dynamically connected sets of "charms" that visitors can save, trade, and
explore. The GI demonstrates a framework for an open, integrated IT system and
shows new uses for it.
Alumni Contributors: Rick Borovoy, Greg Elliott and Boris Grigory Kizelshteyn
Hugh Herr—Biomechatronics
How technology can be used to enhance human physical capability.
59. FitSocket: A Better Way to Make Sockets   Hugh Herr, Neri Oxman, Arthur Petron and Roy Kornbluh (SRI)
Sockets–the cup-shaped devices that attach an amputated limb to a lower-limb
prosthesis–are made through unscientific, artisanal methods that do not have
repeatable quality and comfort from one amputee to the next. The FitSocket project
aims to identify the correlation between leg tissue properties and the design of a
comfortable socket. We accomplish this by creating a programmable socket called
the FitSocket, which can iterate over hundreds of socket designs in minutes instead
of months.
65. Cultural Exports Shahar Ronen, Amy (Zhao) Yu and César A. Hidalgo
NEW LISTING
Cultural Exports introduces a new approach for studying both connections between countries and the cultural impact of countries. Consider a native of a certain country
who becomes famous in other countries–this person is in a sense a "cultural export"
of his home country "imported" to other countries. For example, the popularity of
Dominican baseball player Manny Ramirez in the USA and Korea makes him a
cultural export of the Dominican Republic. Using Wikipedia biographies and
NEW LISTING
Immersion is a visual data experiment that delivers a fresh perspective of your email inbox. Focusing on a people-centric approach rather than the content of the emails,
Immersion brings into view an important personal insight–the network of people you
are connected to via email, and how it evolves over the course of many years.
Given that this experiment deals with data that is extremely private, it is worthwhile
to note that when given secure access to your Gmail inbox (which you can revoke
anytime), Immersion only uses data from email headers and not a single word of
any email's subject or body content.
67. Place Pulse Phil Salesses, Anthony DeVincenzi and César A. Hidalgo
Place Pulse is a website that allows anybody to quickly run a crowdsourced study
and interactively visualize the results. It works by taking a complex question, such
as “Which place in Boston looks the safest?” and breaking it down into easier-to-answer binary pairs. Internet participants are given two images and asked "Which place looks safer?" From the responses, directed graphs are generated and can be mined, allowing the experimenter to identify interesting patterns in the data and form new hypotheses based on their observations. It works with any city or question and
is highly scalable. With an increased understanding of human perception, it should
be possible for calculated policy decisions to have a disproportionate impact on
public opinion.
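One simple way to turn such binary answers into per-image scores is a win-fraction ranking, sketched here on invented data (the project itself mines directed graphs of responses, so treat this only as an illustration of the aggregation step):

```python
# Toy sketch: aggregate "Which place looks safer?" clicks into a ranking.
from collections import defaultdict

votes = [  # (winner, loser) pairs from individual binary comparisons
    ("img_a", "img_b"), ("img_a", "img_c"),
    ("img_b", "img_c"), ("img_a", "img_b"),
]

wins, seen = defaultdict(int), defaultdict(int)
for winner, loser in votes:
    wins[winner] += 1
    seen[winner] += 1
    seen[loser] += 1

# Score each image by the fraction of its comparisons it won.
scores = {img: wins[img] / seen[img] for img in seen}
ranking = sorted(scores, key=scores.get, reverse=True)
```

Breaking one hard, subjective question into many easy binary choices is what makes the study crowdsourceable: each participant's task is trivial, and the structure emerges in aggregate.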
68. The Economic Complexity Observatory   Alex Simoes, Dany Bahar, Ricardo Hausmann and César A. Hidalgo
With more than six billion people and 15 billion products, the world economy is anything but simple. The Economic Complexity Observatory is an online tool that
helps people explore this complexity by providing tools that can allow decision
makers to understand the connections that exist between countries and the myriad
of products they produce and/or export. The Economic Complexity Observatory
puts at everyone’s fingertips the latest analytical tools developed to visualize and
quantify the productive structure of countries and their evolution.
69. The Language Group Network   Shahar Ronen, Kevin Hu, Michael Xu, and César A. Hidalgo
NEW LISTING
Most interactions between cultures require overcoming a language barrier, which is why multilingual speakers play an important role in facilitating such interactions. In addition, certain languages–not necessarily the most spoken ones–are more likely
than others to serve as intermediary languages. We present the Language Group
Network, a new approach for studying global networks using data generated by tens
of millions of speakers from all over the world: a billion tweets, Wikipedia edits in all
languages, and translations of two million printed books. Our network spans over
eighty languages, and can be used to identify the most connected languages and
the potential paths through which information diffuses from one culture to another.
Applications include promotion of cultural interactions, prediction of trends, and
marketing.
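As a rough illustration of how such a network can be built (the data model and connectedness measure here are simplified assumptions, not the authors' method), each multilingual user contributes an edge between every pair of languages they use:

```python
from itertools import combinations
from collections import Counter

def language_network(users):
    """users: iterable of sets of language codes used by one person."""
    edges = Counter()
    for langs in users:
        # Every pair of languages a single person uses becomes a weighted edge.
        for a, b in combinations(sorted(langs), 2):
            edges[(a, b)] += 1
    return edges

def most_connected(edges):
    # Weighted degree: sum of edge weights incident to each language.
    degree = Counter()
    for (a, b), w in edges.items():
        degree[a] += w
        degree[b] += w
    return degree.most_common(1)[0][0]

users = [{"en", "es"}, {"en", "fr"}, {"en", "de", "fr"}, {"es", "pt"}]
edges = language_network(users)
```

The same construction applies whether "users" come from tweets, Wikipedia edit histories, or book translations.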
NEW LISTING
The 8D Display combines a glasses-free 3D display (4D light field output) with a
relightable display (4D light field input). The ultimate effect of this extension to our
earlier BiDi Screen project will be a display capable of showing physically realistic
objects that respond to scene lighting as we would expect. Imagine a shiny virtual
teapot in which you see your own reflection, a 3D model that can be lighted with a
real flashlight to expose small surface features, or a virtual flashlight that illuminates
real objects in front of the display. As the 8D Display captures light field input,
gestural interaction as seen in the BiDi Screen project is also possible.
71. Air Mobs Andy Lippman, Henry Holtzman and Eyal Toledano
72. Brin.gy: What Brings Us Together Henry Holtzman, Andy Lippman and Polychronis Ypodimatopoulos
NEW LISTING
We allow people to form dynamic groups focused on topics that emerge
serendipitously during everyday life. They can be long-lived or flower for a short
time. Examples include people interested in buying the same product, those with
similar expertise, those in the same location, or any collection of such attributes. We
call this the Human Discovery Protocol (HDP). Similar to how computers follow
well-established protocols like DNS in order to find other computers that carry
desired information, HDP presents an open protocol for people to announce bits of
information about themselves, and have them aggregated and returned back in the
form of a group of people that match against the user’s specified criteria. We
experiment with a web-based implementation (brin.gy) that allows users to join and
communicate with groups of people based on their location, profile information, and
items they may want to buy or sell.
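A minimal sketch of the announce-and-aggregate idea behind HDP follows; the attribute names, exact-match rule, and in-memory store are all illustrative assumptions, since the real protocol is distributed and richer:

```python
announcements = {}  # person -> dict of announced attribute key-value pairs

def announce(person, **attributes):
    """A person announces bits of information about themselves."""
    announcements.setdefault(person, {}).update(attributes)

def find_group(**criteria):
    """Aggregate announcements into the group matching every criterion."""
    return {
        person
        for person, attrs in announcements.items()
        if all(attrs.get(k) == v for k, v in criteria.items())
    }

announce("alice", location="Cambridge", wants_to_buy="bicycle")
announce("bob", location="Cambridge", expertise="welding")
announce("carol", location="Boston", wants_to_buy="bicycle")

# Everyone announcing themselves in Cambridge forms one ad-hoc group.
cambridge = find_group(location="Cambridge")
```

This mirrors the DNS analogy in the text: a query over announced attributes resolves to a group of people rather than to a host.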
NEW LISTING
Collaboration and media creation are difficult tasks, both for people and for network
architectures. CoCam is a self-organizing network for real-time camera image
collaboration. Like all camera apps, just point and shoot; CoCam then automatically
joins other media creators into a network of collaborators. Network discovery,
creation, grouping, joining, and leaving are done automatically in the background,
letting users focus on participation in an event. We use local P2P middleware and a
3G negotiation service to create these networks for real-time media sharing.
CoCam also provides multiple views that make the media experience more
exciting, such as appearing to be in multiple places at the same time. The media is
immediately distributed and replicated on multiple peers, so if a camera phone is
confiscated, other users have copies of the images.
74. ContextController Robert Hemsley, Arlene Ducao, Eyal Toledano and Henry Holtzman
NEW LISTING
CoSync builds the ability to create and act jointly into mobile devices. This mirrors
the way we as a society act both individually and in concert. The CoSync device
ecology combines multiple stand-alone devices and controls them opportunistically
as if they were one distributed, or diffuse, device at the user’s fingertips. CoSync
includes a programming interface that allows time-synchronized coordination at a
granularity that will permit watching a movie on one device and hearing the sound
from another. The open API encourages an ever-growing set of such finely
coordinated applications.
NEW LISTING
Flow is an augmented interaction project that bridges the divide between our
non-digital objects and our ecosystem of connected devices. By using
computer vision, Flow enables our traditional interactions to be augmented with
digital meaning, allowing an event in one environment to flow into the next. Through
this, physical actions such as tearing a document can have a mirrored effect and
meaning in our digital environment, leading to actions such as the deletion of the
associated digital file. This project is part of an initial exploration that focuses on
creating an augmented interaction overlay for our environment, enabling users to
redefine their physical actions.
79. MobileP2P Yosuke Bando, Eyal Toledano, Robert Hemsley, Mary Linnell, Dan Sawada
and Henry Holtzman
NEW LISTING
MobileP2P aims to magically populate mobile devices with popular video clips and
app updates without using people's data plans, by opportunistically connecting
nearby devices whenever they are in range of each other.
80. NewsJack Sasha Costanza-Chock, Henry Holtzman, Ethan Zuckerman and Daniel E.
Schultz
NEW LISTING
NewsJack is a media remixing tool built from Mozilla's Hackasaurus. It allows users
to modify the front pages of news sites, altering language and headlines to remake
the news into what they wish it could be.
81. NeXtream: Social Henry Holtzman, ReeD Martin and Mike Shafran
Television
Functionally, television content delivery has remained largely unchanged since the
introduction of television networks. NeXtream explores an experience where the
role of the corporate network is replaced by a social network. User interests,
communities, and peers are leveraged to determine television content, combining
sequences of short videos to create a set of channels customized to each user. This
project creates an interface to explore television socially, connecting a user with a
community through content, with varying levels of interactivity: from passively
consuming a series, to actively crafting one's own television and social experience.
82. OpenIR Aziz Alghunaim, Ilias Koen, Henry Holtzman, Arlene Brigoli Ducao, Juhee Bae
and Stephanie New
NEW LISTING
When an environmental crisis strikes, the most important element in saving lives is
information. Information regarding water depths, spread of oil, fault lines, burn
scars, and elevation is crucial in the face of disaster. Much of this information is
publicly available as infrared satellite data. However, with today’s technology, this
data is difficult to obtain, and even more difficult to interpret. Open Infrared, or
OpenIR, is an ICT (information and communication technology) offering geo-located
infrared satellite data as on-demand map layers and translating the data so that
anyone can understand it easily. OpenIR will be pilot tested in Indonesia, where
ecological and economic vulnerability is apparent from frequent seismic activity and
limited supporting infrastructure. The OpenIR team will explore how increased
accessibility to environmental information can help infrastructure-challenged regions
to deal with environmental crises of many kinds.
83. Proverbial Wallets Henry Holtzman, John Kestner, Daniel Leithinger, Danny Bankman, Emily Tow
and Jaekyung Jung
We have trouble controlling our consumer impulses, and there's a gap between our
decisions and the consequences. When we pull a product off the shelf, do we know
our bank-account balance, or whether we're over budget for the month? Our
existing senses are inadequate to warn us. The Proverbial Wallet fosters a financial
sense at the point of purchase by embodying our electronically tracked assets. We
provide tactile feedback reflecting account balances, spending goals, and
transactions as a visceral aid to responsible decision-making.
NEW LISTING
Our smartphones demand active attention while we use them to navigate streets,
find restaurants, meet friends, and keep track of tasks. SuperShoes allows us to access
this information in a physical ambient form through a foot interface. SuperShoes
takes us to our destination; senses interesting people, places, and events in our
proximity; and notifies us about tasks, all while we immerse ourselves in the
environment. We explore a physical language of interaction afforded by the foot
through various tactile senses. By weaving digital bits into the shoes, SuperShoes
liberates information from the confines of screens and onto the body.
87. The Glass Infrastructure Henry Holtzman, Andy Lippman, Matthew Blackshaw, Jon Ferguson, Catherine Havasi, Julia Ma, Daniel Schultz and Polychronis Ypodimatopoulos
This project builds a social, place-based information window into the Media Lab
using 30 touch-sensitive screens strategically placed throughout the physical
complex and at sponsor sites. The idea is to get people to talk among themselves
about the work that they jointly explore in a public place. We present Lab projects
as dynamically connected sets of "charms" that visitors can save, trade, and
explore. The GI demonstrates a framework for an open, integrated IT system and
shows new uses for it.
Alumni Contributors: Rick Borovoy, Greg Elliott and Boris Grigory Kizelshteyn
NEW LISTING
Truth Goggles attempts to decrease the polarizing effect of perceived media bias by
forcing people to question all sources equally, invoking fact-checking services at
the point of media consumption. Readers will approach even their most trusted
sources with a more critical mentality by viewing content through various "lenses" of
truth.
89. Twitter Weather Henry Holtzman, John Kestner and Stephanie Bian
NEW LISTING
"Where The Hel" is a pair of helmets: plain and funky. The funky helmet is 3D
printed; the plain helmet visualizes proximity to the funky helmet as a function of
signal strength, via an LED light strip. The funky helmet contains an XBee and a
GPS radio. Its position is tracked via a web app. The wearer of the plain helmet
can track the funky one via the web app and the LED strip on his helmet. These
helmets are potential iterations towards a more developed HADR (Humanitarian
Assistance and Disaster Relief) helmet system.
91. Ambient Furniture Hiroshi Ishii, David Rose, and Shaun Salzberg
NEW LISTING
Furniture is the infrastructure for human activity. Every day we open cabinets and
drawers, pull up to desks, recline in recliners, and fall into bed. How can technology
augment these everyday rituals in elegant and useful ways? The Ambient Furniture
project mixes apps with the IKEA catalog to make couches more relaxing, tables
more conversational, desks more productive, lamps more enlightening, and beds
more restful. With input from Vitra and Steelcase, we are prototyping a line of
furniture to explore ideas about peripheral awareness (Google Latitude doorbell),
incidental gestures (Amazon restocking trash can and the Pandora lounge chair),
pre-attentive processing (energy clock), and eavesdropping interfaces (Facebook
photo coffee table).
NEW LISTING
An open publishing platform for visualization, social sharing, and data analysis of
geospatial data.
96. Jamming User Interfaces Hiroshi Ishii, Sean Follmer, Daniel Leithinger, Alex Olwal and Nadia Cheng
NEW LISTING
Malleable user interfaces have the potential to enable radically new forms of
interaction and expressiveness through flexible, free-form, and computationally
controlled shapes and displays. This work specifically focuses on particle jamming
as a simple, effective method for flexible, shape-changing user interfaces where
programmatic control of material stiffness enables haptic feedback, deformation,
tunable affordances, and control gain. We introduce a compact, low-power
pneumatic jamming system suitable for mobile devices, and a new hydraulic-based
technique with fast, silent actuation and optical shape sensing. We enable jamming
structures to sense input and function as interaction devices through two
contributed methods for high-resolution shape sensing using: 1) index-matched
particles and fluids, and 2) capacitive and electric field sensing. We explore the
design space of malleable and organic user interfaces enabled by jamming through
four motivational prototypes that highlight jamming’s potential in HCI, including
applications for tabletops, tablets, and portable shape-changing mobile devices.
97. Kinected Conference Anthony DeVincenzi, Lining Yao, Hiroshi Ishii and Ramesh Raskar
MirrorFugue is an interface for the piano that bridges the gap of location in music
playing by connecting pianists in a virtual shared space reflected on the piano. Built
on a previous design that only showed the hands, our new prototype displays both
the hands and upper body of the pianist. MirrorFugue may be used for watching a
remote or recorded performance, taking a remote lesson, and remote duet playing.
99. Peddl Andy Lippman, Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and
David Lakatos
NEW LISTING
Peddl creates a localized, perfect market. All offers are broadcast, allowing users
to spot trends, bargains, and opportunities. With GPS- and Internet-enabled mobile
devices in almost every pocket, we see an opportunity for a new type of
marketplace that takes into account your physical location, availability, and open
negotiation. Like other real-time activities, we are exploring transactions as an
organizing principle among people that, like barter, may be strong, rich, and
long-lived.
101. Radical Atoms Hiroshi Ishii, Leonardo Bonanni, Keywon Chung, Sean Follmer, Jinha Lee,
Daniel Leithinger and Xiao Xiao
Alumni Contributors: Keywon Chung, Adam Kumpf, Amanda Parkes, Hayes Raffle
and Jamie B Zigelbaum
102. Recompose Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and David Lakatos
Human beings have long shaped the physical environment to reflect designs of form
and function. As an instrument of control, the human hand remains the most
fundamental interface for affecting the material world. In the wake of the digital
revolution, this is changing, bringing us to reexamine tangible interfaces. What if we
could now dynamically reshape, redesign, and restructure our environment using
the functional nature of digital tools? To address this, we present Recompose, a
framework allowing direct and gestural manipulation of our physical environment.
Recompose complements the highly precise, yet concentrated affordance of direct
manipulation with a set of gestures, allowing functional manipulation of an actuated
surface.
Relief is an actuated tabletop display, able to render and animate 3D shapes with a
malleable surface. It allows users to experience and form digital models such as
geographical terrain in an intuitive manner. The tabletop surface is actuated by an
array of motorized pins, which can be addressed individually and sense user input
like pulling and pushing. Our current research focuses on utilizing freehand
gestures for interacting with content on Relief.
104. RopeRevolution Jason Spingarn-Koff (MIT), Hiroshi Ishii, Sayamindu Dasgupta, Lining Yao,
Nadia Cheng (MIT Mechanical Engineering) and Ostap Rudakevych (Harvard
University Graduate School of Design)
Sensetable is a system that wirelessly, quickly, and accurately tracks the positions
of multiple objects on a flat display surface. The tracked objects have a digital state,
which can be controlled by physically modifying them using dials or tokens. We
have developed several new interaction techniques and applications on top of this
platform. Our current work focuses on business supply-chain visualization using
system-dynamics simulation.
Alumni Contributors: Jason Alonso, Dan Chak, Gian Antonio Pangaro, James
Patten and Matt Reynolds
108. T(ether) Hiroshi Ishii, Andy Lippman, Matthew Blackshaw and David Lakatos
T(ether) is a novel spatially aware display that supports intuitive interaction with
volumetric data. The display acts as a window affording users a perspective view of
three-dimensional data through tracking of head position and orientation. T(ether)
creates a 1:1 mapping between real and virtual coordinate space allowing
immersive exploration of the joint domain. Our system creates a shared workspace
in which co-located or remote users can collaborate in both the real and virtual
worlds. The system allows input through capacitive touch on the display and a
motion-tracked glove. When placed behind the display, the user’s hand extends into
the virtual world, enabling the user to interact with objects directly.
109. Tangible Bits Hiroshi Ishii, Sean Follmer, Jinha Lee, Daniel Leithinger and Xiao Xiao
People have developed sophisticated skills for sensing and manipulating our
physical environments, but traditional GUIs (Graphical User Interfaces) do not
employ most of them. Tangible Bits builds upon these skills by giving physical form
to digital information, seamlessly coupling the worlds of bits and atoms. We are
designing "tangible user interfaces" that employ physical objects, surfaces, and
spaces as tangible embodiments of digital information. These include foreground
interactions with graspable objects and augmented surfaces, exploiting the human
senses of touch and kinesthesia.
Alumni Contributors: Yao Wang, Mike Ananny, Scott Brave, Dan Chak, Angela
Chang, Seung-Ho Choo, Keywon Chung, Andrew Dahley, Philipp Frei, Matthew G.
Gorbet, Adam Kumpf, Jean-Baptiste Labrune, Vincent Leclerc, Jae-Chol Lee, Ali
Mazalek, Gian Antonio Pangaro, Amanda Parkes, Ben Piper, Hayes Raffle, Sandia
Ren, Kimiko Ryokai, Victor Su, Brygg Ullmer, Catherine Vaucelle, Craig Wisneski,
Paul Yarin and Jamie B Zigelbaum
111. Video Play Sean Follmer, Hayes Raffle and Hiroshi Ishii
112. GeneFab Bram Sterling, Kelly Chang, Joseph M. Jacobson, Peter Carr, Brian Chow,
David Sun Kong, Michael Oh and Sam Hwang
What would you like to "build with biology"? The goal of the GeneFab projects is to
develop technology for the rapid fabrication of large DNA molecules, with
composition specified directly by the user. Our intent is to facilitate the field of
Synthetic Biology as it moves from a focus on single genes to designing complete
biochemical pathways, genetic networks, and more complex systems. Sub-projects
include: DNA error correction, microfluidics for high-throughput gene synthesis, and
genome-scale engineering (rE. coli).
We are developing techniques to use the focused ion beam to program the
fabrication of nanowire-based nanostructures and logic devices.
115. The Dog Programming Language Salman Ahmad, Zahan Malkani and Sepandar Kamvar
NEW LISTING
Dog is a new programming language that makes it easy and intuitive to create
social applications. Dog focuses on a unique and small set of features that allows it
to achieve the power of a full-blown application development framework. One of
Dog’s key features is built-in support for interacting with people. Dog provides a
natural framework in which both people and computers can be given instructions
and return results. It can perform a long-running computation while also displaying
messages, requesting information, or even sending operations to particular
individuals or groups. By switching between machine and human computation,
developers can create powerful workflows and model complex social processes
without worrying about low-level technical details.
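The source shows no Dog syntax, so the sketch below uses Python to illustrate the idea it describes: a workflow that interleaves machine computation with steps delegated to people. The dinner-planning task, function names, and canned replies are hypothetical.

```python
def ask_person(person, question, answers):
    # Stand-in for Dog's built-in people support: a real system would message
    # the person and wait for a reply; here we look up a canned answer.
    return answers[(person, question)]

def plan_dinner(guests, answers):
    # Human computation: ask each guest a question.
    restrictions = {g: ask_person(g, "Any dietary restrictions?", answers)
                    for g in guests}
    # Machine computation: pick a menu compatible with every reply.
    return "vegetarian" if "vegetarian" in restrictions.values() else "standard"

canned = {("ana", "Any dietary restrictions?"): "vegetarian",
          ("ben", "Any dietary restrictions?"): "none"}
menu = plan_dinner(["ana", "ben"], canned)
```

The developer writes one workflow; whether a step is answered by a person or computed by the machine is hidden behind the same call shape.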
119. BTNz! Kent Larson, Andy Lippman, Shaun David Salzberg, Dan Sawada and
Jonathan Speiser
NEW LISTING
We are constructing a lightweight, viral interface consisting of a button and screen
strategically positioned around public spaces to foster social interactions. Users will
be able to upload messages for display on the screen when the button is pushed.
The idea is to explore if a simple, one-dimensional input device and a small output
device can be powerful enough to encourage people to share information about
their shared space and spur joint social activities. The work includes building an
application environment and collecting and analyzing data on the emergent social
activities. Later work may involve tying identity to button-pushers and providing
more context-aware messages to the users.
120. CityCar Ryan C.C. Chin, William Lark, Jr., Nicholas Pennycooke, Praveen Subramani,
and Kent Larson
Alumni Contributors: Patrik Kunzler, Philip Liang, William J. Mitchell and Raul-David
Poblano
122. CityCar Half-Scale Prototype Kent Larson, Nicholas David Pennycooke and Praveen Subramani
NEW LISTING
The CityCar half-scale prototype has been redesigned from the ground up to
incorporate the latest materials and manufacturing processes, sensing
technologies, battery systems, and more. This new prototype demonstrates the
functional features of the CityCar at half-scale, including the folding chassis. New
sensing systems have been embedded to enable research into autonomous driving
and parking, while lithium batteries will provide extended range. A new control
system based on microprocessors allows for faster boot time and modularity of the
control system architecture.
123. CityCar Ingress-Egress Model Kent Larson, Nicholas David Pennycooke and Praveen Subramani
NEW LISTING
The CityCar Ingress-Egress Model provides a full-scale platform for testing front
ingress and egress for new vehicle types. The platform features three levels of
actuation for controlling the movement of seats within a folding vehicle, and can
store custom presets of seat positioning and folding process for different users.
124. CityCar Testing William Lark, Jr., Nicholas Pennycooke, Ryan C.C. Chin and Kent Larson
Platform
The CityCar Testing Platform is a full-scale and modular vehicle that consists of four
independently controlled Wheel Robots, an extruded aluminum frame, battery pack,
driver's interface, and seating for two. Each Wheel Robot is capable of over 120
degrees of steering freedom, thus giving the CityCar chassis omni-directional
driving ability such as sideways parking, zero-radius turning, torque steering, and
variable velocity (in each wheel) steering. This four-wheeler is an experimental
platform for by-wire controls (non-mechanically coupled controls) for the Wheel
Robots, thus allowing for the platform to be controlled by wireless joysticks. The
four-wheeler also allows the CityCar design team to experiment with highly
personalized body/cabin designs. (Continuing the vision of William J. Mitchell.)
125. CityHealth and Indoor Environment Rich Fletcher, Jason Nawyn, and Kent Larson
NEW LISTING
The spaces in which we live and work have a strong effect on our physical and
mental health. In addition to obvious effects on physical illness and healing, the
quality of our air, the intensity of sound, and the color of our artificial lighting have
also been shown to be important factors that affect cognitive skills, stress levels,
motivation, and work productivity. As a research tool, we have developed small,
wireless, wearable sensors that enable us to simultaneously monitor our
environment and our physiology in real time. By better understanding these
environmental factors, we can design architectural spaces that automatically adapt
to the needs of specific human activities (work/concentration, social relaxation) and
automatically provide for specific health requirements (physical illness, assisted
living).
We demonstrate how the CityHome, which has a very small footprint (840 square
feet), can function as an apartment two to three times that size. This is achieved
through a transformable wall system which integrates furniture, storage, exercise
equipment, lighting, office equipment, and entertainment systems. One potential
scenario for the CityHome is where the bedroom transforms to a home gym, the
living room to a dinner party space for 14 people, a suite for four guests, two
separate office spaces plus a meeting space, or an open loft space for a large
party. Finally, the kitchen can either be open to the living space, or closed off to be
used as a catering kitchen. Each occupant engages in a process to personalize the
precise design of the wall units according to his or her unique activities and
requirements.
127. CityHome: RoboWall Kent Larson, Hasier Larrea and Carlos Olabarri
NEW LISTING
Have you ever been in a teleconference and found it difficult to deliver what you’ve
been writing or sketching on paper to the remote participants? FlickInk reinvents
pen-and-paper interaction and enables your notes to jump from paper to physical
surroundings as well as to a remote destination. With a quick flick of the pen, it
allows you to naturally “throw” your handwriting to remote collaborators whenever
you're ready. While the contents are sharable in real time as you write, you maintain
control of what's shared and what's private. Control over authorship and privacy is
enhanced as this paper-based medium becomes accessible and natural in remote
collaboration. Beyond collaboration, FlickInk also seamlessly transfers writing and
sketches on paper to specified physical objects. We aim to enhance this novel
interaction to enrich highly personalized dynamic experiences for the living-working
space of the future.
130. Hiriko CityCar Urban Kent Larson, Chih-Chao Chuang and Ryan C.C. Chin
Feasibility Studies
NEW LISTING
We are engaging in research that may be incorporated by Denokinn into a feasibility
study for Mobility-on-Demand (MoD) systems in a select number of cities, including
Berlin, Barcelona, Malmo, and San Francisco. The goal of the project is to propose
electric car-sharing pilot programs to collaborating cities; these programs will work
with the cities' existing public infrastructure, use the Hiriko CityCar as the primary
electric vehicle, and study how the system fits the urbanscape and lifestyle of
different cities.
We are working with Denokinn to design and develop an integrated modular system
for assembly and distribution of the CityCar. This project, based in the Basque
region of Spain, will be called the "Hiriko" Project, which stands for Urban Car (Hiri =
urban, Ko = coche or car in Basque). The goal of the Hiriko project is to create a
new, distributed manufacturing system for the CityCar which will enable automotive
suppliers to provide "core" components made of integrated modules such as
in-wheel motor units, battery systems, interiors, vehicle control systems, vehicle
chassis/exoskeleton, and glazing. A full-scale working prototype will be completed
by the end of 2011 with an additional 20 prototypes to be built for testing in 2012.
(Continuing the vision of William J. Mitchell).
133. HomeMaestro Kent Larson, Shaun David Salzberg and Microsoft Research
135. Intelligent Autonomous Parking Environment Chris Post, Raul-David Poblano, Ryan C.C. Chin, and Kent Larson
In an urban environment, space is a valuable commodity. Current parking structures
must allow each driver to independently navigate the parking structure to find a
space. As next-generation vehicles turn more and more to drive-by-wire systems,
though, direct human interaction will not be necessary for vehicle movement. An
intelligent parking environment can use drive-by-wire technology to take the burden
of parking off the driver.
136. Mass-Personalized Solutions for the Elderly Kent Larson, Ryan C.C. Chin, Daniel John Smithwick and Tyrone L. Yang
NEW LISTING
The housing, mobility, and health needs of the elderly are diverse, but current
products and services are generic, disconnected from context, difficult to access
without specialized guidance, and do not anticipate changing life circumstances. We
are creating a platform for delivering integrated, personalized solutions to help aging
individuals remain healthy, autonomous, productive, and engaged. We are
developing new ways to assess specific individual needs and create
mass-customized solutions. We are also developing new systems and standards for
construction that will enable the delivery of more responsive homes, products, and
services; these standards will make possible cost-effective but sophisticated,
interoperable building components and systems. For instance, daylighting controls
will be coordinated with reconfigurable rooms and will accommodate glare
sensitivity. These construction standards will enable industrial suppliers to easily
upgrade and retrofit homes to better care for home occupants as their needs
change over time.
137. Media Lab Energy Praveen Subramani, Raul-David Poblano, Ryan C.C. Chin, Kent Larson and
and Charging Schneider Electric
Research Station
We are collaborating with Schneider Electric to develop a rapid, high-power
charging station in MIT's Stata Center for researching EV rapid charging and battery
storage systems for the electric grid. The system is built on a 500 kW commercial
uninterruptible power supply (UPS) designed by Schneider Electric and modified by
Media Lab researchers to enable rapid power transfer from lead-acid batteries in
the UPS to lithium-ion batteries onboard an electric vehicle. Research experiments
include: exploration of DC battery banks for intermediate energy storage between
the grid and vehicles; repurposing the lead acid batteries in UPS systems with
lithium-ion cells; and exploration of Level III charging connectors, wireless charging,
and user-interface design for connecting the vehicles to physical infrastructure. The
station is scheduled for completion by early 2012 and will be among the most
advanced battery and EV charging research platforms at a university.
139. Mobility on Demand Kent Larson, Ryan C.C. Chin, Chih-Chao Chuang, William Lark, Jr., Brandon
Systems Phillip Martin-Anderson and SiZhi Zhou
NEW LISTING
Operator is an AI agent that keeps tabs on how things are running around town and
tells you how to get where you want to go with the least effort.
143. PlaceLab and BoxLab Jason Nawyn, Stephen Intille and Kent Larson
145. Robotic Facade / Personalized Sunlight Harrison Hall, Kent Larson and Shaun David Salzberg
NEW LISTING
The robotic façade is conceived as a mass-customizable module that combines
solar control, heating, cooling, ventilation, and other functions to serve an urban
apartment. It attaches to the building “chassis” with standardized power, data, and
mechanical attachments to simplify field installation and dramatically increase
energy performance. The design makes use of an articulating mirror to direct shafts
of sunlight to precise points in the apartment interior. Tiny, low-cost, easily installed
wireless sensors and activity recognition algorithms allow occupants to use a mobile
phone interface to map activities of daily living to personalized sunlight positions.
We are also developing strategies to control LED luminaires to turn off, dim, or tune
the lighting to more energy-efficient spectra in response to the location, activities,
and paths of the occupants.
146. SeedPod: Interactive Farming Module Jennifer Broutin Farah, Colin Carew, Rich Fletcher and Kent Larson
NEW LISTING
SeedPod is an interactive farming system that helps everyday people reliably
produce healthy food in urban areas. SeedPod is a scalable, modular system
augmented by technology such as monitoring sensors, networked components, and
smart mobile applications, which together ease and deepen understanding of the
process through which aeroponic vegetables are grown. We believe that SeedPod
serves as a platform for closing the loop between people and food.
147. Shortest Path Tree Kent Larson and Brandon Phillip Martin-Anderson
148. Smart Customization of Men's Dress Shirts: A Study on Environmental Impact Ryan C. C. Chin, Daniel Smithwick and Kent Larson
Sanders Consulting’s 2005 ground-breaking research, “Why Mass Customization is
the Ultimate Lean Manufacturing System,” showed that the best standard
mass-production practices, when framed from the point of view of the entire product
lifecycle (from raw material production to point of purchase), were actually very
inefficient and indeed wasteful in terms of energy, material use, and time. Our
research examines the environmental impacts when applying mass customization
methodologies to men's custom dress shirts. This study traces the production,
distribution, sale, and customer-use of the product, in order to discover key areas of
waste and opportunities for improvement. Our comparative study examines not only
the energy and carbon emissions due to production and distribution, but also
customer acquisition and use, using RFID tag technology to track shirt utilization by
over 20 subjects over a three-month period.
149. Smart DC MicroGrid
Kent Larson and Christophe Yoh Charles Meyers
NEW LISTING
Given the increasing development of renewable energy, its integration into the
electric distribution grid needs to be addressed. In addition, the majority of
household appliances operate on DC. The aim of this project is to develop a
microgrid capable of addressing these issues, while drawing on a smart control
system.
152. Wheel Robots
William Lark, Jr., Nicholas Pennycooke, Ryan C.C. Chin and Kent Larson
The nature of work is rapidly changing, but designers have a poor understanding of
how places of work affect interaction, creativity, and productivity. We are using
mobile phones that ask context-triggered questions and sensors in workplaces to
collect information about how spaces are used and how space influences feelings
such as productivity and creativity. A pilot study took place at the Steelcase
headquarters in 2007, and in the offices of EGO, Inc. in Helsinki, Finland, in 2009. (A
House_n Research Consortium project funded by TEKES.)
158. Goal-Oriented Interfaces for Mobile Phones
Henry Lieberman, Karthik Dinakar, Christopher Fry, Dustin Arthur Smith, Hal
Abelson and Venky Raju
NEW LISTING
Contemporary mobile phones provide a vast array of capabilities in so-called
"apps," but currently each app lives in its own little world, with its own interface.
Apps are usually unable to communicate with each other and unable to cooperate
to meet users' needs. This project intends to enable end users to "program" their
phones using natural language and speech recognition to perform complex tasks. A
user, for example, could say: "Send the song I play most often to Bill." The phone
should realize that an MP3 player holds songs, and that the MP3 app has a function
to order songs by play frequency. It should know how to send a file to another user,
and how to look up the user's contact information. We use state-of-the-art natural
language understanding, commonsense reasoning, and a partial-order planner.
What motivates people? What changes do people want in the world? We approach
questions of this kind by mining goals and plans from text-based websites: wikiHow,
eHow, 43things, to-do lists, and commonsense knowledge bases. 43things tells us
about people's long-term ambitions. How-to instructions and to-do lists tell us about
everyday activities. We've analyzed the corpus to find out which goals are most
popular, controversial, and concealed. The resulting goal network can be used for
plan recognition, natural language understanding, and building intelligent interfaces
that understand why they are being used. Come by and learn about how you can
use this knowledge about actions/goals, their properties (cost, duration, location)
and their relations in your own applications.
163. Multi-Lingual ConceptNet
Hyemin Chung, Jaewoo Chung, Wonsik Kim, Sung Hyon Myaeng and Walter Bender

164. Multilingual Common Sense
Aparecido Fabiano Pinatti de Carvalho, Jesus Savage Carmona, Marie Tsutsumi,
Junia Anacleto, Henry Lieberman, Jason Alonso, Kenneth Arnold, Robert Speer,
Vania Paula de Almeida and Veronica Arreola Rios
Alumni Contributors: Hyemin Chung, Jose H. Espinosa, Wonsik Kim and Yu-Te Shen
NEW LISTING
Language interpretation requires going beyond the words to derive what the
speaker meant–cooperatively making 'leaps of faith' and putting forth assumptions
that can later be revised or retracted. Current natural language interfaces are
opaque; when interpretation goes wrong–which it inevitably does–the human is left
without recourse. The Open Interpreter project brings the assumptions involved with
interpreting English event descriptions into the user interface, so people can
participate in teaching the computer to derive the same common-sense
assumptions that they expected. We show the immediate applications for an
intelligent calendaring application.
171. Ruminati: Tackling Cyberbullying with Computational Empathy
Karthik Dinakar, Henry Lieberman, and Birago Jones
The scourge of cyberbullying has assumed worrisome proportions, with an
ever-increasing number of adolescents admitting to having dealt with it either as a
victim or bystander. Anonymity and the lack of meaningful supervision in the
electronic medium are two factors that have exacerbated this social menace. This
project explores computational methods from natural language processing and
reflective user interfaces to alleviate this problem.
173. Time Out: Reflective User Interface for Social Networks
Birago Jones, Henry Lieberman and Karthik Dinakar
Time Out is an experimental user interface system for addressing cyberbullying on
social networks. A Reflective User Interface (RUI) is a novel concept to help users
consider the possible consequences of their online behavior, and to assist in
intervening in or mitigating potentially negative or harmful actions.
174. Air Mobs
Andy Lippman, Henry Holtzman and Eyal Toledano
177. Barter: A Market-Incented Wisdom Exchange
Dawei Shen, Marshall Van Alstyne and Andrew Lippman
Creative and productive information interchange in organizations is often stymied by
perverse incentives among the members. We transform that competition into a
positive exchange by using market principles. Specifically, we apply innovative
market mechanisms to construct incentives while still encouraging pro-social
behaviors. Barter includes means to enhance knowledge sharing, innovation
creation, and productivity. It is being tested at MIT and in three sponsor companies,
and is becoming available as a readily installable package. We will measure the
results and test the effectiveness of an information market in addressing
organizational challenges. We are learning that transactions in rich markets can
become an organizing principle among people, potentially as strong as social
networks.
178. Brin.gy: What Brings Us Together
Henry Holtzman, Andy Lippman and Polychronis Ypodimatopoulos
NEW LISTING
We allow people to form dynamic groups focused on topics that emerge
serendipitously during everyday life. Groups can be long-lived or flower for a short
time. Examples include people interested in buying the same product, those with
similar expertise, those in the same location, or any collection of such attributes. We
call this the Human Discovery Protocol (HDP). Similar to how computers follow
well-established protocols like DNS in order to find other computers that carry
desired information, HDP presents an open protocol for people to announce bits of
information about themselves and have them aggregated and returned in the form
of a group of people matching the user's specified criteria. We are experimenting
with a web-based implementation (brin.gy) that allows users to join and
communicate with groups of people based on their location, profile information, and
items they may want to buy or sell.
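As a rough illustration of the announce-and-match flow described above, consider a hypothetical in-memory registry; this is a sketch under assumed names, not the actual HDP or brin.gy implementation:

```python
class Registry:
    """Hypothetical sketch of an HDP-style announce/match service."""

    def __init__(self):
        self.announcements = {}  # person -> set of (key, value) facts

    def announce(self, person, **attributes):
        # Each person publishes small key/value facts about themselves.
        self.announcements[person] = set(attributes.items())

    def match(self, **criteria):
        # Return the group of people whose announcements satisfy every criterion.
        wanted = set(criteria.items())
        return sorted(p for p, attrs in self.announcements.items()
                      if wanted <= attrs)

registry = Registry()
registry.announce("alice", location="Cambridge", selling="bike")
registry.announce("bob", location="Cambridge", buying="bike")
registry.announce("carol", location="Helsinki", buying="bike")

# Everyone announcing the same location forms a serendipitous group:
print(registry.match(location="Cambridge"))  # ['alice', 'bob']
```

The aggregation step is what distinguishes this from a simple directory lookup: the answer is a group of people, not a single record.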
179. BTNz!
Kent Larson, Andy Lippman, Shaun David Salzberg, Dan Sawada and Jonathan Speiser
NEW LISTING
We are constructing a lightweight, viral interface consisting of a button and screen
strategically positioned around public spaces to foster social interactions. Users will
be able to upload messages for display on the screen when the button is pushed.
The idea is to explore if a simple, one-dimensional input device and a small output
device can be powerful enough to encourage people to share information about
their shared space and spur joint social activities. The work includes building an
application environment and collecting and analyzing data on the emergent social
activities. Later work may involve tying identity to button-pushers and providing
more context-aware messages to the users.
180. CoCam
Henry Holtzman, Andy Lippman, Dan Sawada and Eyal Toledano
NEW LISTING
Collaboration and media creation are difficult tasks, both for people and for network
architectures. CoCam is a self-organizing network for real-time camera image
collaboration. Like all camera apps, just point and shoot; CoCam then automatically
joins other media creators into a network of collaborators. Network discovery,
creation, grouping, joining, and leaving are done automatically in the background.
NEW LISTING
CoSync builds the ability to create and act jointly into mobile devices. This mirrors
the way we as a society act both individually and in concert. The CoSync device
ecology combines multiple stand-alone devices and controls them opportunistically
as if they were one distributed, or diffuse, device at the user's fingertips. CoSync
includes a programming interface that allows time-synchronized coordination at a
granularity that permits watching a movie on one device while hearing the sound
from another. The open API encourages an ever-growing set of such finely
coordinated applications.
182. Electric Price Tags
Andy Lippman, Matthew Blackshaw and Rick Borovoy
Electric Price Tags are a realization of a mobile system that is linked to technology
in physical space. The underlying theme is that being mobile can mean far more
than focusing on a portable device—it can be the use of that device to unlock data
and technology embedded in the environment. In its current version, users can
reconfigure the price tags on a store shelf to display a desired metric (e.g., price,
unit price, or calories). While this information is present on the boxes of the items for
sale, comparisons would require individual analysis of each box. The visualization
provided by Electric Price Tags allows users to view and filter information in
physical space in ways that were previously possible only online.
184. Line of Sound
Grace Rusi Woo, Rick Borovoy and Andy Lippman
We show how data can be used to deliver sound information only in the direction in
which one looks. The demonstration uses two 55-inch screens transmitting both
human- and machine-relevant information. Each screen shows a video that flashes
a single-bit indicator, which is picked up by a camera mounted on headphones. This
indicator is used to distinguish between the two screens and to correlate an audio
track to the video track.
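The screen-identification step can be pictured as matching the bit sequence the headset camera observes against each screen's known flash pattern. The patterns and the Hamming-distance matching below are illustrative assumptions, not the project's actual scheme:

```python
def hamming(a, b):
    # Count the positions where two bit sequences disagree.
    return sum(x != y for x, y in zip(a, b))

def identify_screen(observed_bits, screen_patterns):
    """Return the id of the screen whose flash pattern best matches."""
    return min(screen_patterns,
               key=lambda sid: hamming(observed_bits, screen_patterns[sid]))

patterns = {"screen_A": [1, 0, 1, 0, 1, 0, 1, 0],
            "screen_B": [1, 1, 0, 0, 1, 1, 0, 0]}
observed = [1, 0, 1, 0, 1, 0, 1, 1]  # noisy reading from the headset camera

print(identify_screen(observed, patterns))  # screen_A
```

Once the screen is identified, the headphones simply play the audio track associated with it.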
NEW LISTING
NewsFlash is a social way to experience the global and local range of current
events. People see a tapestry of newspaper front pages. The headlines and main
photos tell part of the story; NewsFlash tells you the rest. People point their phones
at a headline or picture of interest to bring up a feed of the article text from that
paper. The data emanates from the screen and is captured by a cell phone
camera–any number of people can see it at once and discuss the panoply of
ongoing events. NewsFlash creates a local space that is simultaneously interactive
and provocative. We hope it gets people talking.
188. Peddl
Andy Lippman, Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and David Lakatos
NEW LISTING
Peddl creates a localized, perfect market. All offers are broadcasts, allowing users
to spot trends, bargains, and opportunities. With GPS- and Internet-enabled mobile
devices in almost every pocket, we see an opportunity for a new type of
marketplace which takes into account your physical location, availability, and open
negotiation. Like other real-time activities, we are exploring transactions as an
organizing principle among people that, like Barter, may be strong, rich, and
long-lived.
189. Point & Shoot Data
Andy Lippman and Travis Rich
NEW LISTING
Point & Shoot Data explores the use of visible light as a wireless communication
medium for mobile devices. A snap-on case allows users to send messages to
other mobile devices based on directionality and proximity. No email address,
phone number, or account login is needed–just point and shoot your messages!
The project enables infrastructure-free, scalable, proximity-based communication
between two mobile devices.
191. Recompose
Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and David Lakatos
Human beings have long shaped the physical environment to reflect designs of form
and function. As an instrument of control, the human hand remains the most
fundamental interface for affecting the material world. In the wake of the digital
revolution, this is changing, bringing us to reexamine tangible interfaces. What if we
could now dynamically reshape, redesign, and restructure our environment using
the functional nature of digital tools? To address this, we present Recompose, a
framework allowing direct and gestural manipulation of our physical environment.
Recompose complements the highly precise, yet concentrated affordance of direct
manipulation with a set of gestures, allowing functional manipulation of an actuated
surface.
192. Social Transactions/Open Transactions
Andy Lippman, Kwan Lee, Dawei Shen, Eric Shyu and Phumpong Watanaprakornkul
Social Transactions is an application that allows communities of consumers to
collaboratively sense the market from mobile devices, enabling more informed
financial decisions in a geo-local and timely context. The mobile application not only
allows users to perform transactions, but also to inform, share, and purchase in
groups at desired times. It could, for example, help people connect opportunistically
in a local area to make group purchases, pick up an item for a friend, or perform
reverse auctions. Our framework is an Open Transaction Network that enables
applications from restaurant menu recommendations to electronics purchases. We
tested this with MIT's TechCASH payment system to investigate whether shared
social transactions could provide just-in-time influences to change behaviors.
193. T(ether)
Hiroshi Ishii, Andy Lippman, Matthew Blackshaw and David Lakatos
T(ether) is a novel spatially aware display that supports intuitive interaction with
volumetric data. The display acts as a window affording users a perspective view of
three-dimensional data through tracking of head position and orientation. T(ether)
creates a 1:1 mapping between real and virtual coordinate space allowing
immersive exploration of the joint domain. Our system creates a shared workspace
in which co-located or remote users can collaborate in both the real and virtual
worlds. The system allows input through capacitive touch on the display and a
motion-tracked glove. When placed behind the display, the user’s hand extends into
the virtual world, enabling the user to interact with objects directly.
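The 1:1 mapping between real and virtual coordinates can be sketched as follows: a point touched on the tracked display maps directly, in the same units, into the virtual world. The coordinate frames and numbers are hypothetical, not taken from the T(ether) system:

```python
def display_to_world(touch_xy, display_origin, right, up):
    """Map a 2D touch point (meters, in the display plane) to a 3D world point.

    `display_origin` is the tracked position of the display's corner, and
    `right`/`up` are the display's unit axes reported by the motion tracker.
    """
    u, v = touch_xy
    return tuple(o + u * r + v * p
                 for o, r, p in zip(display_origin, right, up))

# Display held 1 m in front of the tracker origin, unrotated:
point = display_to_world((0.1, 0.05),
                         (0.0, 0.0, 1.0),   # display origin in world space
                         (1.0, 0.0, 0.0),   # display's "right" axis
                         (0.0, 1.0, 0.0))   # display's "up" axis
print(point)  # (0.1, 0.05, 1.0)
```

Because the mapping is 1:1, moving the display 10 cm in the real world moves the viewing window 10 cm through the virtual scene, which is what makes the "window" metaphor hold.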
T+1 is an application that creates an iterative structure to help groups organize their
interests and schedules. Users of T+1 receive instructions and send their personal
information through mobile devices at discretized time steps, orchestrated by a
unique, adaptive scheduling engine. At each time step t, T+1 takes as inputs
several relevant factors of human interaction, such as participants' interests,
opinions, locations, and partner-matching schedules. It then computes and
optimizes the structure and format of group interactions for the next interval. T+1
facilitates consensus formation, better group dynamics, and more engaging user
experiences by using a clearly visible and comprehensible process. We plan to
deploy the platform in both academic and political discussion settings, and to
analyze how user opinions and interests evolve over time to understand its efficacy.
195. The Glass Infrastructure
Henry Holtzman, Andy Lippman, Matthew Blackshaw, Jon Ferguson, Catherine
Havasi, Julia Ma, Daniel Schultz and Polychronis Ypodimatopoulos
This project builds a social, place-based information window into the Media Lab
using 30 touch-sensitive screens strategically placed throughout the physical
complex and at sponsor sites. The idea is to get people to talk among themselves
about the work that they jointly explore in a public place. We present Lab projects
as dynamically connected sets of "charms" that visitors can save, trade, and
explore. The GI demonstrates a framework for an open, integrated IT system and
shows new uses for it.
Alumni Contributors: Rick Borovoy, Greg Elliott and Boris Grigory Kizelshteyn
NEW LISTING
VR Codes are dynamic data invisibly hidden in television and graphic displays.
They allow the display to simultaneously present visual information in an
unimpeded way and real-time data to a camera. Our intention is to make social displays that
many can use at once; using VR codes, many can draw data from a display and
control its use on a mobile device. We think of VR Codes as analogous to QR
codes for video, and envision a future where every display in the environment
contains latent information embedded in VR codes.
199. Death and the Powers: Redefining Opera
Tod Machover, Ben Bloomberg, Peter Torpey, Elena Jessop, Bob Hsiung,
Michael Miller, Akito van Troyer, and Eyal Shahar
"Death and the Powers" is a groundbreaking opera that brings a variety of
technological, conceptual, and aesthetic innovations to the theatrical world. Created
by Tod Machover (composer), Diane Paulus (director), and Alex McDowell
(production designer), the opera uses the techniques of tomorrow to address
age-old human concerns of life and legacy. The unique performance environment,
including autonomous robots, expressive scenery, new Hyperinstruments, and
human actors, blurs the line between animate and inanimate. The opera premiered
in Monte-Carlo in fall 2010, with additional performances in Boston and Chicago in
2011 and continuing engagements worldwide.
NEW LISTING
Media Scores extends the concept of a musical score to other modalities,
facilitating the authoring and performing of multimedia compositions and providing
a medium through which to realize a modern-day Gesamtkunstwerk. Through
research into the representation and encoding of expressive intent, we are
developing systems for composing with media scores. Using such a tool, the
composer will be able to shape an artistic work that may be performed through
208. Remote Theatrical Immersion: Extending "Sleep No More"
Tod Machover, Punchdrunk, Akito Van Troyer, Ben Bloomberg, Gershon Dublon,
Jason Haas, Elena Jessop, Brian Mayton, Eyal Shahar, Jie Qi, Nicholas Joliat,
and Peter Torpey
NEW LISTING
We are collaborating with London-based theater group Punchdrunk to create an
online platform connected to their NYC show, Sleep No More. In the live show,
masked audience members explore and interact with a rich environment,
discovering their own narrative pathways. We have developed an online companion
world to this real-life experience, through which online participants partner with live
audience members to explore the interactive, immersive show together. Pushing the
current capabilities of web standards and wireless communications technologies,
the system delivers personalized multimedia content allowing each online
participant to have a unique experience co-created in real time by his own actions
and those of his onsite partner. This project explores original ways of fostering
meaningful relationships between online and onsite audience members, enhancing
the experiences of both through the affordances that exist only at the intersection of
the real and the virtual worlds.
209. Vocal Vibrations: Expressive Performance for Body-Mind Wellbeing
Tod Machover, Elena Jessop, Rebecca Kleinberger, Le Laboratoire, and The
Dalai Lama Center at MIT
NEW LISTING
Vocal Vibrations is exploring the relationships between human physiology and the
resonant vibrations of the voice. The voice and body are instruments everyone
possesses–they are incredibly individual, infinitely expressive, and intimately linked
to one's own physical form. In collaboration with Le Laboratoire in Paris and the
Dalai Lama Center at MIT, we are exploring the hypothesis that the singing voice
can influence mental and physical health through physicochemical phenomena and
in ways consistent with contemplative practices. We are developing a series of
multimedia experiences, including individual "meditations," a group "singing circle,"
and an iPad application, all effecting mood modulation and spiritual enhancement in
an enveloping context of stunningly immersive, responsive music. For Fall 2013, we
are developing a vocal art installation in Paris where private "grotto" environments
allow individual visitors to meditate using vibrations generated by their own voice,
augmented by visual, acoustic, and physical stimuli.
210. Augmented Product Counter
Natan Linder, Pattie Maes and Rony Kubat
We have created an augmented reality (AR) based product display counter that
transforms any surface or object into an interactive surface, blending digital media
and information with physical space. This system enables shoppers to conduct
research in the store, learn about product features, and talk to a virtual expert to get
advice via built-in video conferencing. The Augmented Product Counter is based on
LuminAR technology, which can transform any standard product counter, enabling
shoppers to get detailed information on products as well as web access to read
unbiased reviews, compare pricing, and conduct research while they interact with
real products. This system delivers an innovative in-store shopping experience
combining live product interactions in a physical environment with the vast amount
of information available on the web in an engaging and interactive manner.
NEW LISTING
We believe that in the near future many portable devices will have resizable
displays. This will allow for devices with a very compact form factor that can
unfold into a large display when needed. In this project, we design and study novel
interaction techniques for devices with flexible, rollable, and foldable displays. We
explore a number of scenarios, including personal and collaborative uses.
When we meet new people in real life, we assess them using a multitude of signals
relevant to our upbringing, society, and our experiences and disposition. When we
encounter a new individual virtually, usually we are looking at a single
communication instance in bodiless form. How can we gain a deeper understanding
of this individual without the cues we have in real life? Hyperego aggregates
220. InterPlay: Full-Body Interaction Platform
Pattie Maes, Seth Hunter and Pol Pla i Conesa
InterPlay is a platform for designers to create dynamic social simulations that
transform public spaces into immersive environments where people become the
central agents. It uses computer vision and projection to facilitate full-body
interaction with digital content. The physical world is augmented to create shared
experiences that encourage active play, negotiation, and creative composition.
We are experimenting with systems that blur the boundary between urban lighting
and digital displays in public spaces. These systems consist of liberated pixels,
which are not confined to rigid frames as are typical urban screens. Liberated pixels
can be applied to existing horizontal and vertical surfaces in any configuration, and
communicate with each other to enable a different repertoire of lighting and display
patterns. We have developed Urban Pixels, a wireless infrastructure for liberated
pixels. Composed of autonomous units, the system presents a programmable and
distributed interface that is flexible and easy to deploy. Each unit includes an
on-board battery, RF transceiver unit, and microprocessor. The goal is to
incorporate renewable energy sources in future versions.
“Light bodies” are mobile and portable, hand-held lights that respond to audio and
vibration input. The motivation to build these devices is grounded in a historical
reinterpretation of street lighting. Before fixed infrastructure illuminated cities at
night, people carried lanterns with them to make their presence known. Using this
as our starting point, we asked how we might engage people in more actively
shaping the lightscapes that surround them. A first iteration of responsive,
LED-based colored lights was designed for use in three different settings: a
choreographed dance performance, an outdoor public installation, and an
audio-visual event.
LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them
into a new category of robotic, digital information devices. The LuminAR Bulb
combines a Pico-projector, camera, and wireless computer in a compact form
factor. This self-contained system provides users with just-in-time projected
information and a gestural user interface, and it can be screwed into standard light
fixtures everywhere. The LuminAR Lamp is an articulated robotic arm, designed to
interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment
their environments with media and information, while seamlessly connecting with
laptops, mobile phones, and other electronic devices. LuminAR transforms surfaces
and objects into interactive spaces that blend digital media and information with the
physical space. The project radically rethinks the design of traditional lighting
objects, and explores how we can endow them with novel augmented-reality
interfaces.
225. MemTable
Pattie Maes, Seth Hunter, Alexandre Milouchev and Emily Zhao
Moving Portrait is a framed portrait that is aware of and reacts to viewers' presence
and body movements. A portrait represents a part of our lives and reflects our
feelings, but it is completely oblivious to the events that occur around it or to the
people who view it. By making a portrait interactive, we create a different and more
engaging relationship between it and the viewer.
MTM "Little John" is a multi-purpose, mid-size, rapid prototyping machine with the
goal of being a personal fabricator capable of performing a variety of tasks (3D
printing, milling, scanning, vinyl cutting) at a price point in the hundreds rather than
thousands of dollars. The machine was designed and built in collaboration with the
MTM—Machines that Make Project at MIT Center for Bits and Atoms.
229. Perifoveal Display
Valentin Heun, Anette von Kapri and Pattie Maes
NEW LISTING
Today's GUIs are made for small screens showing little information. Real-time data
that extends beyond one small screen must be continuously scanned with the eyes
in order to build an abstract model of it in one's mind; GUIs therefore do not scale
to huge amounts of data. The Perifoveal Display takes this abstraction away from
the user and visualizes data so that the full range of vision can be used for data
monitoring. It does so by accounting for the eye's different visual systems: our
roughly 120° field of view is highly sensitive to motion, while only the central 6° is
slow but detailed enough to read text.
NEW LISTING
'PreCursor' is an invisible layer that hovers in front of the screen and enables novel
interaction beyond current touchscreens. A computer mouse provides two levels of
depth when interacting with content on a screen: one can hover or click. Hovering
reveals short descriptions, while clicking selects or performs an action. PreCursor
brings this missing sense of interaction to touchscreens. PreCursor technology has
the potential to expand beyond a basic computer screen: it can also be applied to
mobile touchscreens or to objects in the real world, or can be the launching pad for
creating a 3D space for interaction.
232. Pulp-Based Computing: A Framework for Building Computers Out of Paper
Marcelo Coelho, Pattie Maes, Joanna Berzowska and Lyndl Hall
Pulp-Based Computing is a series of explorations that combine smart materials,
papermaking, and printing. By integrating electrically active inks and fibers during
the papermaking process, it is possible to create sensors and actuators that
behave, look, and feel like paper. These composite materials not only leverage the
physical and tactile qualities of paper, but can also convey digital information,
spawning new and unexpected application domains in ubiquitous and pervasive
computing at extremely affordable costs.
234. ReachIn
Anette von Kapri, Seth Hunter, and Pattie Maes
NEW LISTING
Remote collaboration systems are still far from offering the same rich experience
that collocated meetings provide. Collaborators can transmit their voice and face at
a distance, but it is very hard to point at physical objects and interpret gestures.
ReachIn explores how remote collaborators can "reach into" a shared digital
workspace where they can manipulate virtual objects and data. The collaborators
see their live 3D recreated mesh in a shared virtual space and can point at data or
3D models. They can grab digital objects with their bare hands, and translate, scale,
and rotate them.
237. Sensei: A Mobile Tool for Language Learning
Pattie Maes, Suranga Nanayakkara and Roy Shilkrot
Sensei is a mobile interface for language learning (words, sentences,
pronunciation). It combines techniques from computer vision, augmented reality,
speech recognition, and commonsense knowledge. In the current prototype, the
user points his cell phone at an object and then sees the word and hears it
pronounced in the language of his choice. The system also shows more information
pulled from a commonsense knowledge base. The interface is primarily designed to
be used as an interactive and fun language-learning tool for children. Future
versions will be applied to other contexts such as real-time language translation for
face-to-face communication and assistance to travelers for reading information
displays in foreign languages; in addition, future versions will provide feedback to
users about whether they are pronouncing words correctly. The project is
implemented on a Samsung Galaxy phone running Android, donated by Samsung
Corporation.
Spotlight is about an artist's ability to create a new meaning using the combination
of interactive portraits and diptych or polyptych layouts. The mere placement of two
or more portraits near each other is a known technique to create a new meaning in
the viewer's mind. Spotlight takes this concept into the interactive domain, creating
interactive portraits that are aware of each other's state and gesture. So not only the
visual layout, but also the interaction with others creates a new meaning for the
viewer. Using a combination of interaction techniques, Spotlight engages the viewer
at two levels. At the group level, the viewer influences the portrait's "social
dynamics." At the individual level, a portrait's "temporal gestures" expose much
about the subject's personality.
NEW LISTING
With Swyp you can transfer any file from any app to any app on any device, simply
with a swipe of a finger. Swyp is a framework facilitating cross-app, cross-device
data exchange using physical "swipe" gestures. Our framework allows any number
of touch-sensing, collocated devices to establish file exchange and
communications with no pairing other than a physical gesture. With this inherently
physical paradigm, users can immediately grasp the concepts behind
device-to-device communications. Our prototype application, Postcards, explores
touch-enabled mobile devices connected to the LuminAR augmented surface
interface. Postcards allows users to collaborate and create digital postcards using
Swyp interactions. We demonstrate how Swyp-enabled interfaces can support a
new generation of interactive workspaces by allowing pairing-free, gesture-based
communications to and from collocated devices.
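One way to picture the pairing-free exchange is to match a swipe that exits one device's screen with a swipe that arrives on another within a short time window. This is a simplified sketch with assumed timestamps; the real framework also considers touch geometry and handles networking:

```python
def pair_swipes(exits, entries, window=0.25):
    """Match swipe-out events to swipe-in events on other devices.

    `exits` and `entries` are lists of (device_id, timestamp) pairs in
    seconds; a pair forms when a swipe enters another device within
    `window` seconds of leaving the source device.
    """
    pairs = []
    for src, t_out in exits:
        for dst, t_in in entries:
            if src != dst and 0 <= t_in - t_out <= window:
                pairs.append((src, dst))
    return pairs

exits = [("phone", 1.00)]
entries = [("tablet", 1.10), ("laptop", 2.00)]
print(pair_swipes(exits, entries))  # [('phone', 'tablet')]
```

The tablet's swipe-in arrives 0.10 s after the phone's swipe-out and so pairs with it; the laptop's arrives far too late and is ignored, which is how a single physical gesture can stand in for explicit device pairing.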
249. Textura
Pattie Maes, Marcelo Coelho and Pol Pla i Conesa
thirdEye is a new technique that enables multiple viewers to see different things on
the same display screen at the same time. With thirdEye, a public sign board can
show a Japanese tourist instructions in Japanese and an American in English;
games won't need a split screen anymore–each player can see his or her personal
view of the game on the screen; two people watching TV can watch their favorite
channels on a single screen; a public display can show secret messages or patterns;
and in the same movie theater, people can see different endings of a suspense movie.
254. Watt Watcher
Pattie Maes, Sajid Sadi and Eben Kunz
Energy is the backbone of our technological society, yet we have great difficulty
understanding where and how much of it is used. Watt Watcher is a project that
provides in-place feedback on aggregate energy use per device in a format that is
easy to understand and intuitively compare. Energy is inherently invisible, and its
use is often sporadic and difficult to gauge. How much energy does your laptop use
compared to your lamp? Or perhaps your toaster? By giving users some intuition
regarding these basic questions, this ReflectOn allows users both to understand
their use patterns and to form new, more informed habits.
255. CollaboRhythm Frank Moss, John Moore MD, Scott Gilroy, Joslin Diabetes Clinic, UMass
Medical School, Department of Veterans Affairs, Children's Hospital Boston,
Boston Medical Center
258. I'm Listening John Moore MD, Henry Lieberman and Frank Moss
Carpal Skin is a prototype of a protective glove against Carpal Tunnel
Syndrome, a medical condition in which the median nerve is compressed at the
wrist, leading to numbness, muscle atrophy, and weakness in the hand. Night-time
wrist splinting is the recommended treatment for most patients before going into
carpal tunnel release surgery. Carpal Skin is a process by which to map the
pain-profile of a particular patient—its intensity and duration—and to distribute hard
and soft materials to fit the patient’s anatomical and physiological requirements,
limiting movement in a customized fashion. The form-generation process is inspired
by animal coating patterns in the control of stiffness variation.
268. FitSocket: A Better Hugh Herr, Neri Oxman, Arthur Petron and Roy Kornbluh (SRI)
Way to Make Sockets
Sockets–the cup-shaped devices that attach an amputated limb to a lower-limb
prosthesis–are made through unscientific, artisanal methods that do not have
repeatable quality and comfort from one amputee to the next. The FitSocket project
aims to identify the correlation between leg tissue properties and the design of a
A fast-moving workplace calls for... a fast-moving workstation! The mobile office is
NEW LISTING
a prototype robotic office fitted with a remote-controlled, motorized base; onboard
AC power storage for 6-8 hours; and a 4-axis robotic arm. The mobile office is great
for taking your work down into the machine shop or to lengthy collaboration
meetings.
French for "single shell," Monocoque stands for a construction technique that
supports structural load using an object's external skin. Contrary to the traditional
design of building skins that distinguish between internal structural frameworks and
non-bearing skin elements, this approach promotes heterogeneity and
differentiation of material properties. The project demonstrates the notion of a
structural skin using a Voronoi pattern, the density of which corresponds to
multi-scalar loading conditions. The distribution of shear-stress lines and surface
pressure is embodied in the allocation and relative thickness of the vein-like
elements built into the skin. Its innovative 3D printing technology provides for the
ability to print parts and assemblies made of multiple materials within a single build,
as well as to create composite materials that present preset combinations of
mechanical properties.
Granular materials can be put into a jammed state through the application of
pressure to achieve a pseudo-solid material with controllable rigidity and geometry.
While jamming principles have been long known, large-scale applications of
jammed structures have not been significantly explored. The possibilities for
shape-changing machines and structures are vast and jamming provides a
plausible mechanism to achieve this effect. In this work, jamming prototypes are
constructed to gain a better understanding of this effect, and potential
applications are highlighted and demonstrated. Such applications range from a
morphable chair, to a floor which dynamically changes its softness in response to a
user falling down to reduce injury, to artistic free-form sculpting.
The PCB Origami project is an innovative concept for printing digital materials and
NEW LISTING
creating 3D objects with Rigid-flex PCBs and pick and place machines. These
machines allow printing of digital electronic materials, while controlling the location
and property of each of the components printed. By combining this technology with
Rigid-flex PCBs and computational origami, it is possible to create almost any 3D
shape from a single sheet of PCB with electronics already embedded, producing a
finished product that is both structural and functional.
277. Responsive Glass Neri Oxman, Elizabeth Tsai, and Michal Firstenberg
Hydrogels are crosslinked polymers capable of absorbing great amounts of
NEW LISTING
water. They have been studied for the last 50 years, largely due to their
hydrophilic character at ambient temperatures, which makes them biocompatible and
attractive for various biological applications. Nevertheless, in our project, we are
interested in their hydrophilic-hydrophobic phase-transition, occurring slightly above
room temperature. We investigate the mechanical and optical transformations at
this phase transition–namely, their swelling, permeability, and optical transmission
modification–as enabling ‘responsive’ or ‘passive’ dynamics for future product
design.
279. Shape Memory Inkjet Neil Gershenfeld, Joseph M. Jacobson, Neri Oxman and Benjamin Peters
286. Data-Driven Elevator Joe Paradiso, Gershon Dublon, Nicholas Joliat, Brian Mayton and Ben Houge
Music (MIT Artist in Residence)
Our new building lets us see across spaces, extending our visual perception beyond
NEW LISTING
the walls that enclose us. Yet, invisibly, networks of sensors, from HVAC and
lighting systems to Twitter and RFID, control our environment and capture our
social dynamics. This project proposes extending our senses into this world of
information, imagining the building as glass in every sense. Sensor devices
distributed throughout the Lab transmit privacy-protected audio streams
and real-time measurements of motion, temperature, humidity, and light levels. The
data are composed into an eight-channel audio installation in the glass elevator that
turns these dynamic parameters into music, while microphone streams are
spatialized to simulate their real locations in the building. A pressure sensor in the
elevator provides us with fine-grained altitude to control the spatialization and
sonification. As visitors move from floor to floor, they hear the activities taking place
on each.
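The fine-grained altitude step can be illustrated with the standard barometric (international standard atmosphere) formula. The floor height and reference pressure below are assumptions for the sketch, not values from the installation.

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """International standard atmosphere barometric formula:
    altitude (m) from pressure (hPa) and a sea-level reference."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_index(p_hpa, ground_p_hpa, floor_height_m=4.0):
    """Quantize barometric altitude above the lobby into a floor number
    (4 m per floor is an assumed, illustrative value)."""
    alt = pressure_to_altitude(p_hpa) - pressure_to_altitude(ground_p_hpa)
    return round(alt / floor_height_m)

# A 1.45 hPa drop relative to the lobby is roughly 12 m of ascent.
print(floor_index(1011.8, 1013.25))  # 3
```

In practice the raw pressure stream would be smoothed before quantizing, so the music crossfades rather than jumps between floors.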
289. DoppelLab: Spatialized Sonification in a 3D Virtual Environment
Joe Paradiso, Nicholas Joliat, Brian Mayton, Gershon Dublon, and Ben Houge
(MIT Artist in Residence)
NEW LISTING
In DoppelLab, we are developing tools that intuitively and scalably represent the
rich, multimodal sensor data produced by a building and its inhabitants. Our aims
transcend the traditional graphical display, in terms of the richness of data conveyed
and the immersiveness of the user experience. To this end, we have incorporated
3D spatialized data sonification into the DoppelLab application, as well as in
standalone installations. Currently, we virtually spatialize streams of audio recorded
by nodes throughout the physical space. By reversing and shuffling short audio
segments, we distill the sound to its ambient essence while protecting occupant
privacy. In addition to the sampled audio, our work includes abstract data
sonification that conveys multimodal sensor data. As part of this work, we are
collaborating with the internationally active composer and MIT artist-in-residence
Ben Houge, towards new avenues for cross-reality data sonification and aleatoric
musical composition.
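The reverse-and-shuffle obfuscation can be sketched in a few lines. The segment length and helper name are placeholders, not the DoppelLab implementation; the point is that the operation is a permutation of the samples, so ambient character survives while speech becomes unintelligible.

```python
import random

def obfuscate(samples, segment_len=2048, seed=None):
    """Split an audio buffer into short segments, reverse each segment,
    and shuffle their order (segment length is an assumed value)."""
    rng = random.Random(seed)
    segments = [samples[i:i + segment_len]
                for i in range(0, len(samples), segment_len)]
    segments = [seg[::-1] for seg in segments]   # reverse each segment
    rng.shuffle(segments)                        # reorder the segments
    return [s for seg in segments for s in seg]

audio = list(range(10))                 # stand-in for PCM samples
scrambled = obfuscate(audio, segment_len=3, seed=1)
print(sorted(scrambled) == audio)       # True: same samples, scrambled order
```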
290. DoppelLab: Tools for Exploring and Harnessing Multimodal Sensor Network Data
Joe Paradiso, Gershon Dublon, Laurel Smith Pardue, Brian Mayton, Nicholas
Joliat, and Noah Swartz
Homes and offices are being filled with sensor networks to answer specific queries
and solve pre-determined problems, but no comprehensive visualization tools exist
for fusing these disparate data to examine relationships across spaces and sensing
modalities. DoppelLab is an immersive, cross-reality virtual environment that serves
as an active repository of the multimodal sensor data produced by a building and its
inhabitants. We transform architectural models into browsing environments for
real-time sensor data visualization and sonification, as well as open-ended
platforms for building audiovisual applications atop those data. These applications
in turn become sensor-driven interfaces to physical world actuation and control.
DoppelLab encompasses a set of tools for parsing, visualization, sonification, and
application development, and by organizing data by the space from which they
originate, DoppelLab provides a platform to make both broad and specific queries
about the activities, systems, and relationships in a complex, sensor-rich
environment.
292. Feedback Controlled Joe Paradiso, Matthew Henry Aldrich and Nan Zhao
Solid State Lighting
At present, luminous efficacy and cost remain the greatest barriers to broad
adoption of LED lighting. However, it is anticipated that within several years, these
challenges will be overcome. While we may think our basic lighting needs have
been met, this technology offers many more opportunities than just energy
efficiency: this research attempts to alter our expectations for lighting and cast aside
our assumptions about control and performance. We will introduce new, low-cost
sensing modalities that are attuned to human factors such as user context,
circadian rhythms, or productivity, and integrate these data with atypical
environmental factors to move beyond traditional lux measurements. To research
and study these themes, we are focusing on the development of superior
color-rendering systems, new power topologies for LED control, and low-cost
multimodal sensor networks to monitor the lighting network as well as the
environment.
The FreeD is a hand-held, digitally controlled, milling device that is guided and
monitored by a computer while still preserving the craftsperson's freedom to sculpt
and carve. The computer intervenes only when the milling bit approaches the
planned model, either by slowing down the spindle speed or by drawing back the
shaft; the rest of the time it allows complete freedom, letting the user
manipulate and shape the work in any creative way.
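The intervention policy reads naturally as a speed map. The guard band and speed values below are invented for this sketch and are not FreeD's actual parameters: full speed while the bit is far from the planned surface, a linear slowdown inside a guard band, and a stop (with shaft retraction) at the boundary.

```python
def spindle_speed(distance_mm, full_speed=20000, guard_mm=2.0):
    """Map distance from the planned model surface to a spindle speed
    (rpm). All numbers are illustrative assumptions."""
    if distance_mm <= 0.0:
        return 0                  # at/inside the model: stop, retract shaft
    if distance_mm >= guard_mm:
        return full_speed         # free sculpting, no intervention
    return int(full_speed * distance_mm / guard_mm)  # linear slowdown

print(spindle_speed(5.0))  # 20000: no intervention
print(spindle_speed(1.0))  # 10000: slowed near the planned model
print(spindle_speed(0.0))  # 0: bit drawn back
```

A real controller would evaluate this against the nearest point of the 3D model at the tracked tool pose, every few milliseconds.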
296. Grassroots Mobile Joe Paradiso, Ethan Zuckerman, Pragun Goyal and Nathan Matias
Infrastructure
We want to help people in nations where electric power is scarce sell power to their
neighbors. We’re designing a piece of prototype hardware that plugs into a diesel
NEW LISTING
generator or other power source, distributes the power to multiple outlets, monitors
how much power is used, and uses mobile payments to charge the customer for the
power consumed.
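The metering-and-billing loop amounts to integrating per-outlet power over time and pricing the result. A minimal sketch, with an invented tariff and sample format (the real hardware and payment flow are not specified here):

```python
def bill(readings, tariff_per_kwh=0.50):
    """Integrate per-outlet power samples into energy and a charge.
    readings: {outlet: [(power_watts, duration_seconds), ...]}.
    The tariff is a placeholder value."""
    charges = {}
    for outlet, samples in readings.items():
        joules = sum(watts * seconds for watts, seconds in samples)
        kwh = joules / 3.6e6          # 1 kWh = 3.6 MJ
        charges[outlet] = round(kwh * tariff_per_kwh, 2)
    return charges

meter = {"outlet-1": [(100.0, 3600), (50.0, 7200)],   # 0.2 kWh
         "outlet-2": [(500.0, 7200)]}                  # 1.0 kWh
print(bill(meter))  # {'outlet-1': 0.1, 'outlet-2': 0.5}
```

The resulting per-outlet charge is what would be forwarded to a mobile-payment request.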
298. Patchwerk: Multi-User Network Control of a Massive Modular Synth
Joe Paradiso, Gershon Dublon, Nicholas Joliat and Brian Mayton
NEW LISTING
Patchwerk is a networked synthesizer module with tightly coupled web browser and
tangible interfaces. Patchwerk connects to a pre-existing modular synthesizer using
the emerging cross-platform HTML5 WebSocket standard to enable low-latency,
high-bandwidth, concurrent control of analog signals by multiple users. Online users
control physical outputs on a custom-designed cabinet that reflects their activity
through a combination of motorized knobs and LEDs, and streams the resultant
audio. In a typical installation, a composer creates a complex physical patch on the
modular synth that exposes a set of analog and digital parameters (knobs, buttons,
toggles, and triggers) to the web-enabled cabinet. Both physically present and
online audiences can control those parameters, simultaneously seeing and hearing
the results of each other's actions. By enabling collaborative interaction with a
massive analog synthesizer, Patchwerk brings a broad audience closer to a rare
and historically important instrument.
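The concurrent-control model can be sketched without the network layer. `SharedPatch` and its parameter names are invented for illustration; in the real system these updates travel over WebSockets, and one of the "listeners" is the cabinet's motorized-knob controller.

```python
class SharedPatch:
    """Toy model of multi-user parameter control: many clients set named
    parameters, and every change fans out to all listeners so each user
    sees (and the cabinet reflects) everyone's actions."""
    def __init__(self, params):
        self.state = dict(params)
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def set(self, client, name, value):
        if name not in self.state:
            raise KeyError(name)
        self.state[name] = value
        for notify in self.listeners:
            notify(client, name, value)   # broadcast, including to sender

patch = SharedPatch({"cutoff": 0.5, "resonance": 0.1})
log = []
patch.subscribe(lambda client, name, value: log.append((client, name, value)))
patch.set("web-user-7", "cutoff", 0.8)    # a browser user turns a knob
patch.set("cabinet", "resonance", 0.3)    # someone turns a physical knob
print(patch.state)   # {'cutoff': 0.8, 'resonance': 0.3}
```

Last-writer-wins on a shared state dictionary is the simplest concurrency policy; it matches the physical intuition that a knob has exactly one position.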
301. Scalable and Versatile Surface for Ubiquitous Sensing
Joe Paradiso, Nan-Wei Gong and Steve Hodges (Microsoft Research Cambridge)
NEW LISTING
We demonstrate the design and implementation of a new versatile, scalable, and
cost-effective sensate surface. The system is based on a new conductive inkjet
technology, which allows capacitive sensor electrodes and different types of RF
antennas to be cheaply printed onto a roll of flexible substrate that may be many
meters long. By deploying this surface on (or under) a floor it is possible to detect
the presence and whereabouts of users through both passive and active capacitive
coupling schemes. We have also incorporated GSM and NFC electromagnetic
radiation sensing and piezoelectric pressure and vibration detection. We believe
that this technology has the potential to change the way we think about covering
large areas with sensors and associated electronic circuitry–not just floors, but
potentially desktops, walls, and beyond.
302. TRUSS: Tracking Risk with Ubiquitous Smart Sensing
Joe Paradiso, Gershon Dublon and Brian Dean Mayton
We are developing a system for inferring safety context on construction sites by
fusing data from wearable devices, distributed sensing infrastructure, and video.
Wearable sensors stream real-time levels of dangerous gases, dust, noise, light
quality, precise altitude, and motion to base stations that synchronize the mobile
devices, monitor the environment, and capture video. Context mined from these
data is used to highlight salient elements in the video stream for monitoring and
decision support in a control room. We tested our system in an initial user study on a
construction site, instrumenting a small number of steel workers and collecting data.
A recently completed hardware revision will be followed by further user testing and
interface development.
306. Economic Decision-Making in the Wild
Alex (Sandy) Pentland, Yaniv Altshuler, Katherine Krumme and Wei Pan
Using credit card transaction data and trading data, we look at how patterns of
human behavior change over time and space, and at how these changes relate to
social influence and to macroeconomic features. To what extent do network
features help to predict economic ones?
307. Funf: Open Sensing Framework
Alex (Sandy) Pentland, Nadav Aharony, Wei Pan, Cody Sumter and Alan Gardner
309. Sensible Organizations
Alex (Sandy) Pentland, Benjamin Waber and Daniel Olguin Olguin
Data mining of email has provided important insights into how organizations
function and what management practices lead to greater productivity. But important
communications are almost always face-to-face, so we are missing the greater part
of the picture. Today, however, people carry cell phones and wear RFID badges.
These body-worn sensor networks mean that we can potentially know who talks to
whom, and even how they talk to each other. Sensible Organizations investigates
how these new technologies for sensing human interaction can be used to reinvent
organizations and management.
311. Analysis of Autonomic Sleep Patterns
Akane Sano, Rosalind W. Picard, Suzanne E. Goldman, Beth A. Malow
(Vanderbilt), Rana el Kaliouby, and Robert Stickgold (Harvard)
We are examining autonomic sleep patterns using a wrist-worn biosensor that
enables comfortable measurement of skin conductance, skin temperature, and
motion. The skin conductance reflects sympathetic arousal. We are looking at sleep
patterns in healthy groups, in groups with autism, and in groups with sleep
disorders. We are looking especially at sleep quality and at performance on learning
and memory tasks.
317. Emotion and Memory Daniel McDuff, Rana el Kaliouby and Rosalind Picard
320. Externalization Toolkit
Rosalind W. Picard, Matthew Goodwin and Jackie Chia-Hsun Lee
We propose a set of customizable, easy-to-understand, and low-cost physiological
toolkits in order to enable people to visualize and utilize autonomic arousal
information. In particular, we aim for the toolkits to be usable in one of the most
challenging usability conditions: helping individuals diagnosed with autism. This
toolkit includes: wearable, wireless, heart-rate and skin-conductance sensors;
pendant-like and hand-held physiological indicators hidden or embedded into
certain toys or tools; and a customized software interface that allows caregivers and
parents to establish a general understanding of an individual's arousal profile from
daily life and to set up physiological alarms for events of interest. We are evaluating
the ability of this externalization toolkit to help individuals on the autism spectrum to
better communicate their internal states to trusted teachers and family members.
321. FaceSense: Affective-Cognitive State Inference from Facial Video
Daniel McDuff, Rana el Kaliouby, Abdelrahman Nasser Mahmoud, Youssef
Kashef, M. Ehsan Hoque, Matthew Goodwin and Rosalind W. Picard
People express and communicate their mental states—such as emotions, thoughts,
and desires—through facial expressions, vocal nuances, gestures, and other
non-verbal channels. We have developed a computational model that enables
real-time analysis, tagging, and inference of cognitive-affective mental states from
facial video. This framework combines bottom-up, vision-based processing of the
face (e.g., a head nod or smile) with top-down predictions of mental-state models
(e.g., interest and confusion) to interpret the meaning underlying head and facial
signals over time. Our system tags facial expressions, head gestures, and
affective-cognitive states at multiple spatial and temporal granularities in real time
and offline, in both natural human-human and human-computer interaction contexts.
A version of this system is being made available commercially by Media Lab
spin-off Affectiva, indexing emotion from faces. Applications range from measuring
people's experiences to training tools for people on the autism spectrum and
people with nonverbal learning disabilities.
325. Gesture Guitar Rosalind W. Picard, Rob Morris and Tod Machover
Emotions are often conveyed through gesture. Instruments that respond to gestures
offer musicians new, exciting modes of musical expression. This project gives
musicians wireless, gesture-based control over guitar effects parameters.
327. Infant Monitoring and Communication
Rana el Kaliouby, Rich Fletcher, Matthew Goodwin and Rosalind W. Picard
We have been developing comfortable, safe, attractive physiological sensors that
infants can wear around the clock to wirelessly communicate their internal
physiological state changes. The sensors capture sympathetic nervous system
arousal, temperature, physical activity, and other physiological indications that can
be processed to signal changes in sleep, arousal, discomfort or distress, all of which
are important for helping parents better understand the internal state of their child
and what things stress or soothe their baby. The technology can also be used to
collect physiological and circadian patterns of data in infants at risk for
developmental disabilities.
333. Multimodal Computational Behavior Analysis
David Forsyth (UIUC), Gregory Abowd (GA Tech), Jim Rehg (GA Tech), Shri
Narayanan (USC), Rana el Kaliouby, Matthew Goodwin, Rosalind W. Picard,
Javier Hernandez Rivera, Stan Scarloff (BU) and Takeo Kanade (CMU)
This project will define and explore a new research area we call Computational
Behavior Science–integrated technologies for multimodal computational sensing
and modeling to capture, measure, analyze, and understand human behaviors. Our
motivating goal is to revolutionize diagnosis and treatment of behavioral and
developmental disorders. Our thesis is that emerging sensing and interpretation
capabilities in vision, audition, and wearable computing technologies, when further
developed and properly integrated, will transform this vision into reality. More
specifically, we hope to: (1) enable widespread autism screening by allowing
non-experts to easily collect high-quality behavioral data and perform initial
assessment of risk status; (2) improve behavioral therapy through increased
availability and improved quality, by making it easier to track the progress of an
intervention and follow guidelines for maximizing learning progress; and (3) enable
longitudinal analysis of a child's development based on quantitative behavioral data,
using new tools for visualization.
338. 6D Display Ramesh Raskar, Martin Fuchs, Hans-Peter Seidel, and Hendrik P. A. Lensch
339. Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance
Ramesh Raskar, Ankit Mohan, Grace Woo, Shinsaku Hiura and Quinn Smithwick
With over a billion people carrying camera-phones worldwide, we have a new
opportunity to upgrade the classic bar code to encourage a flexible interface
between the machine world and the human world. Current bar codes must be read
within a short range and the codes occupy valuable space on products. We present
a new, low-cost, passive optical design so that bar codes can be shrunk to less
than 3mm and can be read by unmodified ordinary cameras several meters away.
340. CATRA: Mapping of Cataract Opacities Through an Interactive Approach
Ramesh Raskar, Vitor Pamplona, Erick Passos, Jan Zizka, Jason Boggess,
David Schafran, Manuel M. Oliveira, Everett Lawson, and Estebam Clua
We introduce a novel interactive method to assess cataracts in the human eye by
crafting an optical solution that measures the perceptual impact of forward
scattering on the foveal region. Current solutions rely on highly trained clinicians to
check the back scattering in the crystalline lens and test their predictions on visual
acuity tests. Close-range parallax barriers create collimated beams of light to scan
through sub-apertures scattering light as it strikes a cataract. User feedback
generates maps for opacity, attenuation, contrast, and local point-spread functions.
The goal is to allow a general audience to operate a portable, high-contrast,
341. Coded Computational Photography
Jaewon Kim, Ahmed Kirmani, Ankit Mohan and Ramesh Raskar
Computational photography is an emerging multi-disciplinary field that is at the
intersection of optics, signal processing, computer graphics and vision, electronics,
art, and online sharing in social networks. The first phase of computational
photography was about building a super-camera that has enhanced performance in
terms of the traditional parameters, such as dynamic range, field of view, or depth of
field. We call this 'Epsilon Photography.' The next phase of computational
photography is building tools that go beyond the capabilities of this super-camera.
We call this 'Coded Photography.' We can code exposure, aperture, motion,
wavelength, and illumination. By blocking light over time or space, we can preserve
more details about the scene in the recorded single photograph.
342. Compressive Sensing for Visual Signals
Ramesh Raskar, Kshitij Marwah and Ashok Veeraraghavan (MERL)
Research in computer vision is riding a new tide called compressive sensing.
Carefully designed capture methods exploit the sparsity of the underlying signal in a
transformed domain to reduce the number of measurements and use an
appropriate reconstruction method. Traditional progressive methods capture
successively more detail using a sequence of simple projection bases, whereas
random projections use no such sequence and rely on l0 minimization for
reconstruction, which is computationally inefficient. Here, we question this new
tide and claim that for most situations simple methods work better, and that the
best projective method lies between the two extremes.
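One family of methods between the two extremes is greedy sparse recovery, e.g. orthogonal matching pursuit (OMP). The sketch below is a generic textbook illustration, not the authors' method; the dictionary (identity atoms plus a scaled Hadamard basis, mutual coherence 1/4) and sparsity k=2 are chosen so that exact recovery is provably guaranteed.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x
    from measurements y = A x."""
    residual = y.astype(float)
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the chosen support, then update residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Dictionary: 16 identity atoms + 16 unit-norm Hadamard atoms.
H = np.array([[1.0]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])     # 16x16 Sylvester-Hadamard matrix
A = np.hstack([np.eye(16), H / 4.0])    # coherence 1/4 < 1/3: k=2 is exact

x_true = np.zeros(32)
x_true[3], x_true[20] = 1.5, -2.0       # 2-sparse signal
x_hat = omp(A, A @ x_true, k=2)
print(np.allclose(x_hat, x_true))       # True: exact recovery
```

Convex l1 solvers recover the same signal at higher cost; the greedy loop above is the "simple method" end of the trade-off.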
343. Layered 3D: Gordon Wetzstein, Douglas Lanman, Matthew Hirsch, Wolfgang Heidrich, and
Glasses-Free 3D Ramesh Raskar
Printing
We develop tomographic techniques for image synthesis on displays composed of
compact volumes of light-attenuating material. Such volumetric attenuators recreate
NEW LISTING a 4D light field or high-contrast 2D image when illuminated by a uniform backlight.
Since arbitrary views may be inconsistent with any single attenuator, iterative
tomographic reconstruction minimizes the difference between the emitted and target
light fields, subject to physical constraints on attenuation. For 3D displays, spatial
resolution, depth of field, and brightness are increased, compared to parallax
barriers. We conclude by demonstrating the benefits and limitations of
attenuation-based light field displays using an inexpensive fabrication method:
separating multiple printed transparencies with acrylic sheets.
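The iterative reconstruction can be sketched for the simplest case: a single 2D target image formed by a stack of attenuating layers under a uniform backlight. (The actual system optimizes a full 4D light field; the layer count, iteration count, and minimum transmittance below are illustrative assumptions.) Because transmittances multiply, the problem becomes additive in the log domain, and each gradient step is projected onto the physical range of the material.

```python
import numpy as np

def factor_layers(target, n_layers=2, n_iter=200, lr=0.5, t_min=0.05):
    """Find per-layer transmittances whose product approximates a target
    image. Work in the log domain; project onto [t_min, 1] each step."""
    log_target = np.log(np.clip(target, t_min, 1.0))
    log_t = np.zeros((n_layers,) + target.shape)   # start fully transparent
    for _ in range(n_iter):
        err = log_t.sum(axis=0) - log_target       # residual in log domain
        log_t -= lr * err / n_layers               # gradient step per layer
        np.clip(log_t, np.log(t_min), 0.0, out=log_t)  # physical constraint
    return np.exp(log_t)

target = np.array([[0.9, 0.25],
                   [0.09, 0.5]])                   # toy 2x2 "image"
layers = factor_layers(target)
print(np.round(layers.prod(axis=0), 3))            # product matches target
```

For multiple views, the same projected-descent structure applies, but each ray sums the log-attenuations of the layer pixels it crosses, which is what makes the problem tomographic.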
344. LensChat: Sharing Photos with Strangers
Ramesh Raskar, Rob Gens and Wei-Chao Chen
With networked cameras in everyone's pockets, we are exploring the practical and
creative possibilities of public imaging. LensChat allows cameras to communicate
with each other using trusted optical communications, allowing users to share
photos with a friend by taking pictures of each other, or borrow the perspective and
abilities of many cameras.
Using a femtosecond laser and a camera with a time resolution of about one trillion
frames per second, we can capture movies of light as it moves through a scene,
gets trapped inside a tomato, or bounces off the surfaces in a bottle of water. We
use this ability to see the time of flight and to reconstruct images of objects that our
camera cannot see directly (i.e., to look around the corner).
Alumni Contributor: Di Wu
346. NETRA: Smartphone Add-On for Eye Tests
Vitor Pamplona, Manuel Oliveira, Erick Passos, Ankit Mohan, David Schafran,
Jason Boggess and Ramesh Raskar
Can a person look at a portable display, click on a few buttons, and recover his
refractive condition? Our optometry solution combines inexpensive optical elements
and interactive software components to create a new optometry device suitable for
developing countries. The technology allows for early, extremely low-cost, mobile,
fast, and automated diagnosis of the most common refractive eye disorders: myopia
(nearsightedness), hypermetropia (farsightedness), astigmatism, and presbyopia
(age-related visual impairment). The patient overlaps lines in up to eight meridians
and the Android app computes the prescription. The average accuracy is
comparable to the prior art—and in some cases, even better. We propose the use
of our technology as a self-evaluation tool for use in homes, schools, and at health
centers in developing countries, and in places where an optometrist is not available
or is too expensive.
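The prescription computation from per-meridian alignments can be sketched with the standard sphero-cylindrical model, P(θ) = S + C·sin²(θ − α), linearized as P = a0 + a1·cos 2θ + a2·sin 2θ and fit by least squares. The numbers below are synthetic, and this is a generic optics sketch rather than NETRA's implementation.

```python
import numpy as np

def fit_prescription(angles_deg, powers):
    """Fit sphere S, cylinder C, axis (degrees) from refractive power
    measured along several meridians, via the double-angle linearization
    P = a0 + a1*cos(2*theta) + a2*sin(2*theta)."""
    th = np.radians(np.asarray(angles_deg, float))
    M = np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
    a0, a1, a2 = np.linalg.lstsq(M, np.asarray(powers, float), rcond=None)[0]
    C = 2.0 * np.hypot(a1, a2)                       # plus-cylinder form
    axis = np.degrees(0.5 * np.arctan2(-a2, -a1)) % 180.0
    S = a0 - C / 2.0
    return S, C, axis

# Synthetic eye: S = -2.0 D, C = +1.5 D, axis = 30 degrees,
# measured along eight evenly spaced meridians.
angles = np.arange(8) * 22.5
powers = -2.0 + 1.5 * np.sin(np.radians(angles - 30.0)) ** 2
S, C, axis = fit_prescription(angles, powers)
print(round(S, 3), round(C, 3), round(axis, 3))  # -2.0 1.5 30.0
```

With eight well-spread meridians the three model parameters are overdetermined, which is also what lets the fit average out per-meridian alignment noise.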
347. PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures
Ramesh Raskar, Aydin Arpa, Otkrist Gupta and Gabriel Taubin
NEW LISTING
We present a near real-time system for interactively exploring a collectively
captured moment without explicit 3D reconstruction. Our system favors immediacy
and local coherency over global consistency. It is common to represent photos as
vertices of a weighted graph. The weighted angled graphs of photos used in this
work can be regarded as the result of discretizing the Riemannian geometry of the
high-dimensional manifold of all possible photos. Ultimately, our system enables
everyday people to take advantage of each others' perspectives in order to create
on-the-spot spatiotemporal visual experiences similar to the popular bullet-time
sequence. We believe that this type of application will greatly enhance shared
human experiences spanning from events as personal as parents watching their
children's football game to highly publicized red-carpet galas.
348. Polarization Fields: Douglas Lanman, Gordon Wetzstein, Matthew Hirsch, Wolfgang Heidrich, and
Glasses-Free 3DTV Ramesh Raskar
351. Second Skin: Motion Capture with Actuated Feedback for Motor Learning
Ramesh Raskar, Kenichiro Fukushi, Christopher Schonauer and Jan Zizka
We have created a 3D motion-tracking system with automatic, real-time
vibrotactile feedback, built from an assembly of photo-sensors, infrared
projector pairs, vibration motors, and a wearable suit. This system allows us to
enhance and quicken the motor learning process in a variety of fields such as
healthcare (physiotherapy), entertainment (dance), and sports (martial arts).
354. Slow Display Daniel Saakes, Kevin Chiu, Tyler Hutchison, Biyeun Buczyk, Naoya Koizumi
and Masahiko Inami
How can we show our 16 megapixel photos from our latest trip on a digital display?
How can we create screens that are visible in direct sunlight as well as complete
darkness? How can we create large displays that consume less than 2W of power?
355. SpeckleSense Alex Olwal, Andrew Bardagjy, Jan Zizka and Ramesh Raskar
Motion sensing is of fundamental importance for user interfaces and input devices.
NEW LISTING
In applications where optical sensing is preferred, traditional camera-based
approaches can be prohibitive due to limited resolution, low frame rates, and the
required computational power for image processing. We introduce a novel set of
motion-sensing configurations based on laser speckle sensing that are particularly
suitable for human-computer interaction. The underlying principles allow these
configurations to be fast, precise, extremely compact, and low cost.
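The core motion estimate behind speckle sensing can be sketched as a correlation peak search: the lag that maximizes the cross-correlation between two captures of the speckle pattern gives the displacement. This is a 1D toy with synthetic data, not the actual SpeckleSense pipeline.

```python
import numpy as np

def estimate_shift(ref, cur):
    """Return the displacement (in pixels) between two 1D speckle
    captures: the lag maximizing their circular cross-correlation."""
    n = len(ref)
    scores = [np.dot(ref, np.roll(cur, -lag)) for lag in range(n)]
    lag = int(np.argmax(scores))
    return lag if lag <= n // 2 else lag - n   # map to a signed shift

rng = np.random.default_rng(3)
speckle = rng.uniform(-1.0, 1.0, 128)   # stand-in for a speckle frame
moved = np.roll(speckle, 7)             # sensor translated by 7 pixels
print(estimate_shift(speckle, moved))   # 7
```

Because speckle patterns are high-contrast and statistically unique, the correlation peak is sharp, which is what allows a few photodiodes (rather than a full camera) to track motion quickly and cheaply.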
356. Tensor Displays: High-Quality Glasses-Free 3D TV
Gordon Wetzstein, Douglas Lanman, Matthew Hirsch and Ramesh Raskar
NEW LISTING
We introduce tensor displays: a family of glasses-free 3D displays comprising all
architectures employing (a stack of) time-multiplexed LCDs illuminated by uniform
or directional backlighting. We introduce a unified optimization framework that
encompasses all tensor display architectures and allows for optimal glasses-free 3D
display. We demonstrate the benefits of tensor displays by constructing a
reconfigurable prototype using modified LCD panels and a custom integral imaging
backlight. Our efficient, GPU-based NTF implementation enables interactive
applications. In our experiments we show that tensor displays reveal practical
architectures with greater depths of field, wider fields of view, and thinner form
factors, compared to prior automultiscopic displays.
357. Theory Unifying Ray and Wavefront Lightfield Propagation
George Barbastathis, Ramesh Raskar, Belen Masia, Se Baek Oh and Tom Cuypers
This work focuses on bringing powerful concepts from wave optics to the creation of
new algorithms and applications for computer vision and graphics. Specifically, the
ray-based, 4D lightfield representation, based on simple 3D geometric principles,
has led to a range of new applications that include digital refocusing, depth
estimation, synthetic aperture, and glare reduction within a camera or using an
array of cameras. The lightfield representation, however, is inadequate to describe
interactions with diffractive or phase-sensitive optical elements. Therefore we use
Fourier optics principles to represent wavefronts with additional phase information.
We introduce a key modification to the ray-based model to support modeling of
wave phenomena. The two key ideas are "negative radiance" and a "virtual light
projector." This involves exploiting higher dimensional representation of light
transport.
358. Trillion Frames Per Ramesh Raskar, Andreas Velten, Everett Lawson, Di Wu, and Moungi G.
Second Camera Bawendi
361. App Inventor Hal Abelson, Eric Klopfer, Mitchel Resnick, Leo Burd, Andrew McKinney,
Shaileen Pokress, CSAIL and Scheller Teacher Education Program
NEW LISTING
The Center for Mobile Learning is driven by a vision that people should be able to
experience mobile technology as creators, not just consumers. One focus of our
activity here is App Inventor, a Web-based program development tool that even
beginners with no prior programming experience can use to create mobile
applications. Work on App Inventor was initiated in Google Research by Hal
Abelson and is continuing at the MIT Media Lab as a collaboration with the
Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Scheller
Teacher Education Program (STEP).
362. Collab Camp Ricarose Roque, Amos Blanton, Natalie Rusk and Mitchel Resnick
363. Computer Clubhouse Mitchel Resnick, Natalie Rusk, Chris Garrity, Claudia Urrea, and Robbie Berg
Alumni Contributors: Leo Burd, Robbin Chapman, Rachel Garber, Tim Gorton,
Michelle Hlubinka and Elisabeth Sylvan
364. Computer Clubhouse Village Chris Garrity, Natalie Rusk and Mitchel Resnick
The Computer Clubhouse Village is an online community that connects people at
Computer Clubhouse after-school centers around the world. Through the Village,
Clubhouse members and staff (at more than 100 Clubhouses in 21 countries) can
share ideas with one another, get feedback and advice on their projects, and work
together on collaborative design activities.
Drawdio is a pencil that draws music. You can sketch musical instruments on paper
and play them with your finger. Touch your drawings to bring them to life—or
collaborate through skin-to-skin contact. Drawdio works by creating electrical
circuits with graphite and the human body.
NEW LISTING
More and more computational activities revolve around collecting, accessing, and manipulating large sets of data, but introductory approaches to learning programming typically center on algorithmic concepts and flow of control, not on data. Computational exploration of data, especially of datasets, has usually been restricted to predefined operations in spreadsheet software such as Microsoft Excel. This project builds on the Scratch programming language and environment to
allow children to explore data and datasets. With the extensions provided by this
project, children can build Scratch programs to not only manipulate and analyze
data from online sources, but also to collect data through various means such as
surveys and crowd-sourcing. This toolkit will support many different types of
projects like online polls, turn-based multiplayer games, crowd-sourced stories,
visualizations, information widgets, and quiz-type games.
NEW LISTING
MaKey MaKey lets you transform everyday objects into computer interfaces. Make
a game pad out of Play-Doh, a musical instrument out of bananas, or any other
invention you can imagine. It's a little USB device you plug into your computer and
you use it to make your own switches that act like keys on the keyboard: Make +
Key = MaKey MaKey! It’s plug and play. No need for any electronics or
programming skills. Since MaKey MaKey looks to your computer like a regular
mouse and keyboard, it’s automatically compatible with any piece of software you
can think of. It’s great for beginners tinkering and exploring, for experts prototyping
and inventing, and for everybody who wants to playfully transform their world.
NEW LISTING
Map Scratch is an extension of Scratch that enables kids to program with maps
within their Scratch projects. With Map Scratch, kids can create interactive tours,
games, and data visualizations with real-world geographical data and maps.
372. Scratch Mitchel Resnick, John Maloney, Natalie Rusk, Karen Brennan, Champika Fernando, Ricarose Roque, Sayamindu Dasgupta, Amos Blanton, Michelle Chung, Abdulrahman Idlbi, Eric Rosenbaum, Brian Silverman, Paula Bonta
Alumni Contributors: Gaia Carini, Margarita Dekoli, Evelyn Eastmond, Amon Millner,
Andres Monroy-Hernandez and Tamara Stern
Scratch Day is a network of face-to-face local gatherings, on the same day in all
parts of the world, where people can meet, share, and learn more about Scratch, a
programming environment that enables people to create their own interactive
stories, games, animations, and simulations. We believe that these types of
face-to-face interactions remain essential for ensuring the accessibility and
sustainability of initiatives such as Scratch. In-person interactions enable richer
forms of communication among individuals, more rapid iteration of ideas, and a
deeper sense of belonging and participation in a community. The first Scratch Day
took place on May 16, 2009, with 120 events in 44 different countries. The second
Scratch Day took place on May 22, 2010.
375. ScratchJr Mitchel Resnick, Marina Bers, Paula Bonta, Brian Silverman and Sayamindu
Dasgupta
NEW LISTING
The ScratchJr project aims to bring the ideas and spirit of Scratch programming
activities to younger children, enabling children ages five to seven to program their
own interactive stories, games, and animation. To make ScratchJr developmentally
appropriate for younger children, we are revising the interface and providing new
structures to help young children learn core math concepts and problem-solving
strategies. We hope to make a version of ScratchJr publicly available in 2013.
376. Singing Fingers Eric Rosenbaum, Jay Silver and Mitchel Resnick
Singing Fingers allows children to fingerpaint with sound. Users paint by touching a
screen with a finger, but color only emerges if a sound is made at the same time. By
touching the painting again, users can play back the sound. This creates a new
level of accessibility for recording, playback, and remixing of sound.
379. HouseFly: Immersive Video Browsing and Data Visualization Philip DeCamp, Rony Kubat and Deb Roy
HouseFly combines audio-video recordings from multiple cameras and microphones to generate an interactive, 3D reconstruction of recorded events.
Developed for use with the longitudinal recordings collected by the Human
Speechome Project, this software enables the user to move freely throughout a
virtual model of a home and to play back events at any time or speed. In addition to
audio and video, the project explores how different kinds of data may be visualized
in a virtual space, including speech transcripts, person tracking data, and retail
transactions.
380. Human Speechome Project Philip DeCamp, Brandon Roy, Soroush Vosoughi and Deb Roy
The Human Speechome Project is an effort to observe and computationally model
the longitudinal language development of a single child at an unprecedented scale.
To achieve this, we are recording, storing, visualizing, and analyzing communication
and behavior patterns in over 200,000 hours of home video and speech recordings.
The tools that are being developed for mining and learning from hundreds of
terabytes of multimedia data offer the potential for breaking open new business
opportunities for a broad range of industries—from security to Internet commerce.
The living room is the heart of social and communal interactions in a home. Often
present in this space is a screen: the television. When in use, this communal
gathering space brings together people and their interests, and their varying needs
for company, devices, and content. This project focuses on using personal devices
such as mobile phones with the television; the phone serves as a controller and
social interface by offering a channel to convey engagement, laughter, and viewer
comments, and to create remote co-presence.
NEW LISTING
The "Nominal Group Technique" is a popular way to brainstorm, often executed with Post-it notes and voting stickers. We're reimagining and reimplementing this technique for online use, for things such as hackathons, design workshops, and brainstorms across multiple geographies. The best part: everyone can take the results of the brainstorm with them and embed them in blogs or websites.
Inspired by the fact that people are communicating more and more through
technology, Flickr This explores ways for people to have emotion-rich conversations
through all kinds of media provided by people and technology—a way for
technology to allow remote people to have conversations more like face-to-face
experiences by grounding them in shared media. Flickr This lets viewable content provide structure for a conversation; grounded in that content, a conversation can move between synchronous and asynchronous modes and evolve into richer collaborative conversation and media.
NEW LISTING
Calling a person versus calling a place has quite distinctive affordances. With the arrival of mobile phones, the concept of calling has moved from calling a place to calling a person. Frontdesk proposes a place-based communication tool that is
When friends give directions, they often don't describe the whole route, but instead provide landmarks along the way with which they think we'll be familiar. Friends can
assume we have certain knowledge because they know our likes and dislikes.
Going My Way attempts to mimic a friend by learning about where you travel,
identifying the areas that are close to the desired destination from your frequent
path, and picking a set of landmarks to allow you to choose a familiar one. When
you select one of the provided landmarks, Going My Way will provide directions
from it to the destination.
389. Indoor Location Sensing Using Geo-Magnetism Chris Schmandt, Jaewoo Chung, Nan-Wei Gong, Wu-Hsi Li and Joe Paradiso
We present an indoor positioning system that measures location using disturbances of the Earth's magnetic field by structural steel elements in a building. The presence
of these large steel members warps the geomagnetic field such that lines of
magnetic force are locally not parallel. We measure the divergence of the lines of
the magnetic force field using e-compass parts with slight physical offsets; these measurements are used to create local position signatures that can later be compared against readings from the same sensors at the location to be measured. We demonstrate accuracy
within one meter 88% of the time in experiments in two buildings and across
multiple floors within the buildings.
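As a rough illustration of signature-based lookup (the positions, readings, and names below are invented for this sketch, not taken from the project), locating a device can be framed as a nearest-neighbor search over stored magnetic fingerprints:

```python
import math

# Hypothetical fingerprint map: each key is a known position, each value a
# tuple of magnetic-field readings, one per physically offset e-compass.
FINGERPRINTS = {
    (0.0, 0.0): (412.0, 398.5, 420.1),
    (1.0, 0.0): (405.2, 401.7, 415.9),
    (0.0, 1.0): (399.8, 407.3, 410.2),
}

def distance(a, b):
    """Euclidean distance between two magnetic signatures."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def locate(reading):
    """Return the stored position whose signature best matches `reading`."""
    return min(FINGERPRINTS, key=lambda pos: distance(FINGERPRINTS[pos], reading))

print(locate((404.0, 402.0, 416.0)))  # closest to the signature at (1.0, 0.0)
```

A real deployment would need many more fingerprints and some smoothing across consecutive readings, but the matching step stays this simple.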
Bringing deliberative process and consensus decision-making into the 21st century: a practical set of tools for assisting with meeting structure, deliberative process, brainstorming, and negotiation, helping groups to democratically engage with each other across geographies and time zones.
OnTheRun is a location-based exercise game designed for the iPhone. The player
assumes the role of a fugitive trying to gather clues to clear his name. The game is
played outdoors while running, and the game creates missions that are tailored to
the player's neighborhood and running ability. The game is primarily an audio
experience, and gameplay involves following turn-by-turn directions, outrunning
virtual enemies, and reaching destinations.
NEW LISTING
How can one understand and visualize the lifestyle of a person on the other side of the world? Puzzlaef attempts to tackle this question through a mobile picture-puzzle game that users collaboratively solve with pictures from their lifestyles.
NEW LISTING
Now that mobile phones are starting to have 3D display and capture capabilities, there are opportunities to enable new applications that enhance person-person
communication or person-object interaction. This project explores one such
application: acquiring 3D models of objects using cell phones with stereo cameras.
Such models could serve as shared objects that ground communication in virtual
environments and mirrored worlds or in mobile augmented reality applications.
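With a calibrated stereo pair like the cameras described above, per-point depth follows from the classic pinhole relation Z = f·B/d. A minimal sketch (all values illustrative, not the project's actual parameters):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo geometry: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the two
    cameras in meters; disparity_px: horizontal pixel shift of the same
    feature between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A phone-scale rig: 700 px focal length, 2 cm baseline, 20 px disparity
# places the feature roughly 0.7 m from the cameras.
print(depth_from_disparity(700.0, 0.02, 20.0))
```

Repeating this over matched features across the two views yields the point cloud from which a 3D model can be built.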
NEW LISTING
Exploring your city is a great way to make friends, discover new places, find new interests, and invent yourself. Tagzz is an Android app where everyone collectively
defines the places they visit and the places in turn define them. Tagzz allows you to
discover yourself by discovering places. You tag a spot and create some buzz for it; if everyone agrees the spot is 'fun', this bolsters your 'fun' quotient, and if everyone agrees it is 'geeky', it pushes up your 'geeky' score. Thus emerges your personal tag cloud. Follow tags to chance upon new places. Find people with 'tag clouds' similar to your own and experience new places together. Create buzz for your
favorite spots and track other buzz to find who has the #bestchocolatecake in town!
400. Tin Can Chris Schmandt, Matthew Donahoe and Drew Harry
401. Tin Can Classroom Chris Schmandt, Drew Harry and Eric Gordon (Emerson College)
Classroom discussions may not seem like an environment that needs a new kind of
supporting technology. But we've found that augmenting classroom discussions
with an iPad-based environment to help promote discussion, keep track of current
and future discussion topics, and create a shared record of class keeps students
engaged and involved with discussion topics, and helps restart the discussion when
conversation lags. Contrary to what you might expect, having another discussion
venue doesn't seem to add to student distraction; rather it tends to focus distracted
students on this backchannel discussion. For the instructor, our system offers
powerful insights into the engagement and interests of students who tend to speak
less in class, which in turn can empower less-active students to contribute in a
venue in which they feel more comfortable.
NEW LISTING
Between the Bars is a blogging platform for one out of every 142 Americans—prisoners—that makes it easy to blog using standard postal mail. It consists of software tools that make it easy to upload PDF scans of letters and to crowd-source transcriptions of the scanned images. Between the Bars includes the
usual full-featured blogging tools including comments, tagging, RSS feeds, and
notifications for friends and family when new posts are available.
403. Codesign Toolkit Sasha Costanza-Chock, Molly Sauter and Becky Hurwitz
404. Controversy Mapper Ethan Zuckerman, Rahul Bhargava, Erhardt Graeff and Matt Stempeck
NEW LISTING
How does a media controversy become the only thing any of us are talking about?
Using the Media Cloud platform, we're reverse-engineering major news stories to
visualize how ideas spread, how media frames change over time, and whose voices
dominate a discussion. We've started with a case study of Trayvon Martin, a
teenager who was shot and killed. His story became major national news...weeks
after his death. Analysis of stories like Trayvon's provides a revealing portrait of our complicated media ecosystem.
NEW LISTING
We are actively engaging with community coalitions in order to build their capacity to do their own data visualization and presentation. New computer-based tools are
lowering the barriers of entry for making engaging and creative presentations of
data. Rather than encouraging partnerships with epidemiologists, statisticians, or
programmers, we see an opportunity to build capacity within small community
organizations by using these new tools.
406. Grassroots Mobile Joe Paradiso, Ethan Zuckerman, Pragun Goyal and Nathan Matias
Infrastructure
NEW LISTING
We want to help people in nations where electric power is scarce sell power to their neighbors. We're designing a piece of prototype hardware that plugs into a diesel generator or other power source, distributes the power to multiple outlets, monitors
how much power is used, and uses mobile payments to charge the customer for the
power consumed.
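The metering-and-billing step can be sketched as integrating power samples into energy and multiplying by a tariff (function names, sampling scheme, and tariff below are assumptions for illustration, not the project's actual design):

```python
def bill(readings_w, interval_s, tariff_per_kwh):
    """Integrate per-outlet power samples into energy used and amount owed.

    readings_w: power samples in watts; interval_s: seconds between samples;
    tariff_per_kwh: price per kilowatt-hour charged via mobile payment.
    """
    energy_kwh = sum(readings_w) * interval_s / 3_600_000  # W*s -> kWh
    return energy_kwh, energy_kwh * tariff_per_kwh

# A 100 W load sampled three times at 60 s intervals, billed at 0.50/kWh.
kwh, cost = bill([100, 100, 100], 60, 0.50)
print(kwh, cost)
```

The hardware would run this accounting per outlet, so each customer is charged only for what their own devices drew.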
408. Mapping Banned Books Ethan Zuckerman, American Library Association, Chris Peterson and National Coalition Against Censorship
NEW LISTING
Books are challenged and banned in public schools and libraries across the country. But which books, where, by whom, and for what reasons? The Mapping
Banned Books project is a partnership between the Center for Civic Media, the
American Library Association, and the National Coalition Against Censorship to a)
visualize existing data on book challenges, b) detect what the existing data doesn't
capture, and c) devise new methods to surface suppressed speech.
NEW LISTING
Mapping the Globe is a set of interactive visualizations and maps that help us understand where the Boston Globe directs its attention. Media attention matters –
in quantity and quality. It helps determine what we talk about as a public and how
we talk about it. Mapping the Globe tracks where the paper's attention goes and
what that attention looks like across different regional geographies in combination
with diverse data sets like population, crime and income.
410. Media Cloud Hal Roberts, Ethan Zuckerman and David LaRochelle
411. Media Meter Ethan Zuckerman, Nathan Matias, Matt Stempeck, Rahul Bhargava and Dan
Schultz
NEW LISTING
What have you seen in the news this week? And what did you miss? Are you getting the blend of local, international, political, and sports stories you desire? We're
building a media-tracking platform to empower you, the individual, and news
providers themselves, to see what you’re getting and what you’re missing in your
daily consumption and production of media. The first round of modules developed
for the platform allow you to compare the breakdown of news topics and byline
gender across multiple news sources.
412. New Day New Standard Abdulai Bah, Anjum Asharia, Sasha Costanza-Chock, Rahul Bhargava, Leo Burd, Rebecca Hurwitz, Marisa Jahn and Rodrigo Davies
413. NewsJack Sasha Costanza-Chock, Henry Holtzman, Ethan Zuckerman and Daniel E.
Schultz
NEW LISTING
NewsJack is a media remixing tool built from Mozilla's Hackasaurus. It allows users
to modify the front pages of news sites, changing language and headlines to
change the news into what they wish it could be.
414. NGO 2.0 Jing Wang, Rongting Zhou, Endy Xie, Shi Song
NEW LISTING
NGO2.0 is a project that grew out of the work of MIT's New Media Action Lab. The project recognizes that digital media and Web 2.0 are vital to grassroots NGOs in China. NGOs in China operate under enormous constraints because of their semi-legal status. Grassroots NGOs cannot compete with government-affiliated NGOs for the attention of mainstream media, which leads to difficulties in acquiring resources and raising awareness of the causes they are promoting. The NGO2.0
Project serves grassroots NGOs in the underdeveloped regions of China, training
them to enhance their digital and social media literacy through Web 2.0 workshops.
The project also rolls out a crowd map to enable the NGO sector and the Corporate
Social Responsibility sector to find out what each sector has accomplished in
producing social good.
416. Social Mirror Ethan Zuckerman, Nathan Matias, Gaia Marcus and Royal Society of Arts
NEW LISTING
Social Mirror transforms social science research by making offline social network research cheaper, faster, and more reliable. Research on whole-life networks typically involves costly paper forms that take months to process. Social Mirror's
digital process respects participant privacy while also putting social network
analysis within reach of community research and public service evaluation. By
providing instant feedback to participants, Social Mirror can also invite people to
consider and change their connection to their communities. Our pilot studies have
already shown the benefits for people facing social isolation.
VoIP Drupal is an innovative framework that brings the power of voice and
Internet-telephony to Drupal sites. It can be used to build hybrid applications that
combine regular touchtone phones, web, SMS, Twitter, IM and other
communication tools in a variety of ways, facilitating community outreach and
providing an online presence to those who are illiterate or do not have regular
access to computers. VoIP Drupal will change the way you interact with Drupal,
your phone and the web.
419. Vojo.co Ethan Zuckerman, Sasha Costanza-Chock, Rahul Bhargava, Ed Platt, Becky
Hurwitz, Rodrigo Davies, Alex Goncalves, Denise Cheng and Rogelio Lopez
NEW LISTING
Vojo.co is a hosted mobile blogging platform that makes it easy for people to share
content to the web from mobile phones via voice calls, SMS, or MMS. Our goal is to
make it easier for people in low-income communities to participate in the digital
public sphere. You don't need a smartphone or an app to post blog entries or digital stories to Vojo; any phone will do. You don't even need Internet access: Vojo lets you create an account via SMS and start posting right away. Vojo is powered by the
VozMob Drupal Distribution, a customized version of the popular free and open
source content management system that is being developed through an ongoing
codesign process by day laborers, household workers, and a diverse team from the
Institute of Popular Education of Southern California (IDEPSCA).