
Massachusetts Institute of Technology

Media Lab
Projects | Fall 2015

MIT Media Lab


Buildings E14 / E15
75 Amherst Street
Cambridge, Massachusetts 02139-4307
[email protected]
http://www.media.mit.edu
617 253-5960

Many of the MIT Media Lab research projects described in the following pages are conducted under the auspices of
sponsor-supported, interdisciplinary Media Lab centers, joint research programs, and initiatives. They are:
Autism & Communication Technology Initiative
The Autism & Communication Technology Initiative utilizes the unique features of the Media Lab to foster the development of
innovative technologies that can enhance and accelerate the pace of autism research and therapy. Researchers are especially
invested in creating technologies that promote communication and independent living by enabling non-autistic people to
understand the ways autistic people are trying to communicate; improving autistic people's ability to use receptive and
expressive language along with other means of functional, non-verbal expression; and providing telemetric support that reduces
reliance on caregivers' physical proximity, yet still enables enriching and natural connectivity as wanted and needed.
Center for Civic Media
Communities need information to make decisions and take action: to provide aid to neighbors in need, to purchase an
environmentally sustainable product and shun a wasteful one, to choose leaders on local and global scales. Communities are
also rich repositories of information and knowledge, and often develop their own innovative tools and practices for information
sharing. Existing systems to inform communities are changing rapidly, and new ecosystems are emerging where old distinctions
like writer/audience and journalist/amateur have collapsed. The Civic Media group is a partnership between the MIT Media Lab
and Comparative Media Studies at MIT. Together, we work to understand these new ecosystems and to build tools and systems
that help communities collect and share information and connect that information to action. We work closely with communities to
understand their needs and strengths, and to develop useful tools together using collaborative design principles. We particularly
focus on tools that can help amplify the voices of communities often excluded from the digital public sphere and connect them
with new audiences, as well as on systems that help us understand media ecologies, augment civic participation, and foster
digital inclusion.
Center for Extreme Bionics
Half of the world's population currently suffers from some form of physical or neurological disability. At some point in our lives, it
is all too likely that a family member or friend will be struck by a limiting or incapacitating condition, from dementia, to the loss of
a limb, to a debilitating disease such as Parkinson's. Today we acknowledge and even "accept" serious physical and mental
impairments as inherent to the human condition. But must these conditions be accepted as "normal"? What if, instead, through
the invention and deployment of novel technologies, we could control biological processes within the body in order to repair
these conditions, or even eradicate them? What if there were no such thing as human disability? These questions drive the work
of Media Lab faculty members Hugh Herr and Ed Boyden and MIT Institute Professor Robert Langer, and have led them and the
MIT Media Lab to propose the establishment of a new Center for Extreme Bionics. This dynamic new interdisciplinary organization
will draw on the existing strengths of research in synthetic neurobiology, biomechatronics, and biomaterials, combined with
enhanced capabilities for design development and prototyping.
Center for Mobile Learning
The Center for Mobile Learning invents and studies new mobile technologies to promote learning anywhere anytime for anyone.
The Center focuses on mobile tools that empower learners to think creatively, collaborate broadly, and develop applications that
are useful to themselves and others around them. The Center's work covers location-aware learning applications, mobile
sensing and data collection, augmented reality gaming, and other educational uses of mobile technologies. The Center's first
major activity will focus on App Inventor, a programming system that makes it easy for learners to create mobile apps by fitting
together puzzle piece-shaped blocks in a web browser.
Center for Terrestrial Sensing
The deeply symbiotic relationship between our planet and ourselves is increasingly mediated by technology. Ubiquitous,
networked sensing has provided the earth with an increasingly sophisticated electronic nervous system. How we connect with,
interpret, visualize, and use the geoscience information shared and gathered is a deep challenge, with transformational
potential. The Center for Terrestrial Sensing aims to address this challenge.

The most current information about our research is available on the MIT Media Lab Web site, at
http://www.media.mit.edu/research/.

MIT Media Lab

October 2015

Page i

Communications Futures Program


The Communications Futures Program conducts research on industry dynamics, technology opportunities, and regulatory issues
that form the basis for communications endeavors of all kinds, from telephony to RFID tags. The program operates through a
series of working groups led jointly by MIT researchers and industry collaborators. It is highly participatory, and its agenda
reflects the interests of member companies that include both traditional stakeholders and innovators. It is jointly directed by
Dave Clark (CSAIL), Charles Fine (Sloan School of Management), and Andrew Lippman (Media Lab).


The Lab has also organized the following special interest groups (SIGs), which deal with particular subject areas.
Advancing Wellbeing
In contributing to the digital revolution, the Media Lab helped fuel a society where increasing numbers of people are obese,
sedentary, and glued to screens. Our online culture has promoted meaningfulness in terms of online fame and numbers of
viewers, and converted time previously spent building face-to-face relationships into interactions online with people who may not
be who they say they are. What we have helped to create, willingly or not, often diminishes the social-emotional relationships
and activities that promote physical, mental, and social health. Moreover, our workplace culture escalates stress, provides
unlimited caffeine, distributes nutrition-free food, holds back-to-back sedentary meetings, and encourages overnight hackathons
and unhealthy sleep behavior. Without being dystopian about technology, this effort aims to spawn a series of projects that
leverage the many talents and strengths in the Media Lab in order to reshape technology and our workplace to enhance health
and wellbeing.
CE 2.0
Most of us are awash in consumer electronics (CE) devices: from cellphones, to TVs, to dishwashers. They provide us with
information, entertainment, and communications, and assist us in accomplishing our daily tasks. Unfortunately, most are not as
helpful as they could and should be; for the most part, they are dumb, unaware of us or our situations, and often difficult to use.
In addition, most CE devices cannot communicate with our other devices, even when such communication and collaboration
would be of great help. The Consumer Electronics 2.0 initiative (CE 2.0) is a collaboration between the Media Lab and its
sponsor companies to formulate the principles for a new generation of consumer electronics that are highly connected,
seamlessly interoperable, situation-aware, and radically simpler to use. Our goal is to show that as computing and
communication capability seep into more of our everyday devices, these devices do not have to become more confusing and
complex, but rather can become more intelligent in a cooperative and user-friendly way.
City Science
The world is experiencing a period of extreme urbanization. In China alone, 300 million rural inhabitants will move to urban
areas over the next 15 years. This will require building an infrastructure equivalent to the one housing the entire population of
the United States in a matter of a few decades. In the future, cities will account for nearly 90 percent of global population growth,
80 percent of wealth creation, and 60 percent of total energy consumption. Developing better strategies for the creation of new
cities is, therefore, a global imperative. Our need to improve our understanding of cities, however, is pressed not only by the
social relevance of urban environments, but also by the availability of new strategies for city-scale interventions that are enabled
by emerging technologies. Leveraging advances in data analysis, sensor technologies, and urban experiments, City Science will
provide new insights into creating a data-driven approach to urban design and planning. To build the cities that the world needs,
we need a scientific understanding of cities that considers our built environments and the people who inhabit them. Our future
cities will desperately need such understanding.
Connection Science
As more of our personal and public lives become infused and shaped by data from sensors and computing devices, the lines
between the digital and the physical have become increasingly blurred. New possibilities arise, some promising, others alarming,
but both with an inexorable momentum that is supplanting time-honored practices and institutions. MIT Connection Science is a
cross-disciplinary effort drawing on the strengths of faculty, departments and researchers across the Institute, to decode the
meaning of this dynamic, at times chaotic, new environment. The initiative will help business executives, investors,
entrepreneurs and policymakers capitalize on the multitude of opportunities unlocked by the new hyperconnected world we live
in.
Ethics
The Ethics Initiative works to foster multidisciplinary program designs and critical conversations around ethics, wellbeing, and
human flourishing. The initiative seeks to create collaborative platforms for scientists, engineers, artists, and policy makers to
optimize designing for humanity....
Future of News
The Future of News is designing, testing, and making creative tools that help newsrooms adapt in a time of rapid change. As
traditional news models erode, we need new models and techniques to reach a world hungry for news, but whose reading and
viewing habits are increasingly splintered. Newsrooms need to create new storytelling techniques, recognizing that the way
users consume news continues to change. Readers and viewers expect personalized content, deeper context, and information
that enables them to influence and change their world. At the same time, newsrooms are seeking new ways to extend their
influence, to amplify their message by navigating new paths for readers and viewers, and to find new methods of delivery. To
tackle these problems, we will work with Media Lab students and the broader MIT community to identify promising projects and
find newsrooms across the country interested in beta-testing those projects.


Future Storytelling
The Future Storytelling working group at the Media Lab is rethinking storytelling for the 21st century. The group takes a new and
dynamic approach to how we tell our stories, creating new methods, technologies, and learning programs that recognize and
respond to the changing communications landscape. The group builds on the Media Lab's more than 25 years of experience in
developing society-changing technologies for human expression and interactivity. By applying leading-edge technologies to
make stories more interactive, improvisational, and social, researchers are working to transform audiences into active
participants in the storytelling process, bridging the real and virtual worlds, and allowing everyone to make and share their own
unique stories. Research also explores ways to revolutionize imaging and display technologies, including developing
next-generation cameras and programmable studios, making movie production more versatile and economical.
Media Lab Learning
The Media Lab Learning initiative explores new approaches to learning. We study learning across many dimensions, ranging
from neurons to nations, from early childhood to lifelong scholarship, and from human creativity to machine intelligence. The
program is built around a cohort of learning innovators from across the diverse Media Lab groups. We are designing tools and
technologies that change how, when, where, and what we learn; and developing new solutions to enable and enhance learning
everywhere, including at the Media Lab itself. In addition to creating tools and models, the initiative provides non-profit and
for-profit mechanisms to help promising innovations to scale.
Open Agriculture (OpenAG)
The MIT Media Lab Open Agriculture (OpenAG) initiative is on a mission to create healthier, more engaging, and more inventive
future food systems. We believe the precursor to a healthier and more sustainable food system will be the creation of an
open-source ecosystem of food technologies that enable and promote transparency, networked experimentation, education, and
hyper-local production. The OpenAG Initiative brings together partners from industry, government, and academia to develop an
open-source "food tech" research collective for the creation of the global agricultural hardware, software, and data commons.
Together we will build collaborative tools and open technology platforms for the exploration of future food systems.
The Pixel Factory: Data Visualization SIG
The rise of computational methods has generated a new natural resource: data. The Pixel Factory focuses on the creation of
data visualization resources and tools in collaboration with corporate members. The goals of The Pixel Factory are twofold. First,
we will create software resources that will facilitate the development of online data visualization platforms. More importantly, we
will create these resources as a means to learn, as the most valuable outcome of the Pixel Factory will not be the software
resources produced, as incredible as these could be, but the generation of people it will imbue with the capacity to create these
resources.
Ultimate Media
Visual media has irretrievably lost its lock on the audience but has gained unprecedented opportunity to evolve the platform by
which it is communicated and to become integrated with the social and data worlds in which we live. Ultimate Media is creating a
platform for the invention, creation, and realization of new ways to explore and participate in the media universe. We apply
extremes of access, processing, and interaction to build new media experiences and explorations that permit instant video
blogging, exploration of the universe of news and narrative entertainment, and physical interfaces that allow people to
collaborate around media.


V. Michael Bove Jr.: Object-Based Media .............................................................................................................................. 1


1. 3D Telepresence Chair ........................................................................................................................................................ 1
2. 4K Comics ........................................................................................................................................................................... 1
3. Aerial Volumetric Light-Field Display ................................................................................................................................... 1
4. Ambi-Blinds .......................................................................................................................................................................... 1
5. BigBarChart ......................................................................................................................................................................... 1
6. Bottles&Boxes: Packaging with Sensors ............................................................................................................................. 1
7. Calliope ................................................................................................................................................................................ 2
8. Consumer Holo-Video ......................................................................................................................................................... 2
9. Dressed in Data ................................................................................................................................................................... 2
10. DUSK ................................................................................................................................................................................... 2
11. EmotiveModeler: An Emotive Form Design CAD Tool ........................................................................................................ 2
12. Everything Tells a Story ....................................................................................................................................................... 3
13. Guided-Wave Light Modulator ............................................................................................................................................. 3
14. Holoshop ............................................................................................................................................................................. 3
15. Infinity-by-Nine ..................................................................................................................................................................... 3
16. ListenTree: Audio-Haptic Display in the Natural Environment ............................................................................................. 3
17. Live Objects ......................................................................................................................................................................... 4
18. Narratarium .......................................................................................................................................................................... 4
19. Networked Playscapes: Dig Deep ....................................................................................................................................... 4
20. Pillow-Talk ........................................................................................................................................................................... 4
21. Programmable Synthetic Hallucinations .............................................................................................................................. 4
22. ShAir: A Platform for Mobile Content Sharing ..................................................................................................................... 4
23. Slam Force Net .................................................................................................................................................................... 5
24. Smell Narratives .................................................................................................................................................................. 5
25. SurroundVision .................................................................................................................................................................... 5
26. Thermal Fishing Bob ............................................................................................................................................................ 5

Ed Boyden: Synthetic Neurobiology ...................................................................................................................................... 5


27. Optogenetics and Synthetic Biology Tools .......................................................................................................................... 5
28. Prototype Strategies for Treating Brain Disorders ............................................................................................................... 6
29. Tools for Mapping the Molecular Structure of the Brain ...................................................................................................... 6
30. Tools for Recording High-Speed Brain Dynamics ............................................................................................................... 6
31. Understanding Normal and Pathological Brain Computations ............................................................................................ 6

Cynthia Breazeal: Personal Robots ....................................................................................................................................... 7


32. AIDA: Affective Intelligent Driving Agent .............................................................................................................................. 7
33. Animal-Robot Interaction ..................................................................................................................................................... 7
34. Cloud-HRI ............................................................................................................................................................................ 7
35. DragonBot: Android Phone Robots for Long-Term HRI ...................................................................................................... 7
36. Global Literacy Tablets ........................................................................................................................................................ 8
37. Huggable: A Social Robot for Pediatric Care ....................................................................................................................... 8
38. LightSwarm .......................................................................................................................................................................... 8
39. Mind-Theoretic Planning for Robots .................................................................................................................................... 8
40. Robot Learning from Human-Generated Rewards .............................................................................................................. 8
41. Robotic Language Learning Companions ........................................................................................................................... 9
42. Robotic Learning Companions ............................................................................................................................................ 9
43. SHARE: Understanding and Manipulating Attention Using Social Robots .......................................................................... 9
44. Socially Assistive Robotics: An NSF Expedition in Computing ............................................................................................ 9
45. Tega: A New Robot Platform for Long-Term Interaction ................................................................................................... 10
46. TinkRBook: Reinventing the Reading Primer .................................................................................................................... 10

Hugh Herr: Biomechatronics ................................................................................................................................................ 10


47. Artificial Gastrocnemius ..................................................................................................................................................... 10
48. Biomimetic Active Prosthesis for Above-Knee Amputees ................................................................................................. 11
49. Control of Muscle-Actuated Systems via Electrical Stimulation ......................................................................................... 11
50. Dancing Control System for Bionic Ankle Prosthesis ........................................................................................................ 11
51. Effect of a Powered Ankle on Shock Absorption and Interfacial Pressure ........................................................................ 11
52. FitSocket: Measurement for Attaching Objects to People ................................................................................................. 12
53. FlexSEA: Flexible, Scalable Electronics Architecture for Wearable Robotics Applications ............................................... 12

54. Human Walking Model Predicts Joint Mechanics, Electromyography, and Mechanical Economy .................................... 12
55. Load-Bearing Exoskeleton for Augmentation of Human Running ..................................................................................... 12
56. Neural Interface Technology for Advanced Prosthetic Limbs ............................................................................................ 13
57. Powered Ankle-Foot Prosthesis ........................................................................................................................................ 13
58. Sensor-Fusions for an EMG Controlled Robotic Prosthesis .............................................................................................. 13
59. Tethered Robotic System for Understanding Human Movements .................................................................................... 13
60. Variable-Impedance Prosthetic (VIPr) Socket Design ....................................................................................................... 14
61. Volitional Control of a Powered Ankle-Foot Prosthesis ..................................................................................................... 14

Cesar Hidalgo: Macro Connections ..................................................................................................................................... 14


62. Collective Memory ............................................................................................................................................................. 14
63. Data Visualization: The Pixel Factory ................................................................................................................................ 14
64. DIVE .................................................................................................................................................................................. 14
65. FOLD ................................................................................................................................................................................. 15
66. GIFGIF ............................................................................................................................................................................... 15
67. Immersion .......................................................................................................................................................................... 15
68. Opus .................................................................................................................................................................................. 15
69. Pantheon ........................................................................................................................................................................... 15
70. Place Pulse ........................................................................................................................................................................ 15
71. StreetScore ........................................................................................................................................................................ 16
72. The Economic Complexity Observatory ............................................................................................................................ 16
73. The Language Group Network .......................................................................................................................................... 16
74. The Network Impact in Success ........................................................................................................................................ 16
75. The Privacy Bounds of Human Mobility ............................................................................................................................. 16

Hiroshi Ishii: Tangible Media ................................................................................................................................................ 17


76. bioLogic ............................................................................................................................................................................. 17
77. inFORM ............................................................................................................................................................................. 17
78. jamSheets: Interacting with Thin Stiffness-Changing Material .......................................................................................... 17
79. LineFORM ......................................................................................................................................................................... 17
80. MirrorFugue ....................................................................................................................................................................... 18
81. MMODM: Massively Multiplayer Online Drum Machine .................................................................................................... 18
82. Pneumatic Shape-Changing Interfaces ............................................................................................................................. 18
83. Radical Atoms .................................................................................................................................................................... 18
84. TRANSFORM .................................................................................................................................................................... 18
85. TRANSFORM: Adaptive and Dynamic Furniture ............................................................................................................... 18

Joseph M. Jacobson: Molecular Machines ......................................................................................................................... 19


86. Context-Aware Biology ...................................................................................................................................................... 19
87. Context-Aware Pipette ....................................................................................................................................................... 19
88. GeneFab ............................................................................................................................................................................ 19
89. NanoFab ............................................................................................................................................................................ 19
90. Scaling Up DNA Logic and Structures ............................................................................................................................... 19
91. Synthetic Photosynthesis ................................................................................................................................................... 19

Sepandar Kamvar: Social Computing .................................................................................................................................. 20


92.  Microculture ................................................................................................................................................. 20
93.  Storyboards ................................................................................................................................................. 20
94.  The Dog Programming Language ............................................................................................................... 20
95.  Wildflower Montessori .................................................................................................................................. 20
96.  You Are Here ............................................................................................................................................... 20

Kent Larson: Changing Places ............................................................................................................................................. 21


97.  ARkits: Architectural Robotics Kits .............................................................................................................. 21
98.  BoxLab ........................................................................................................................................................ 21
99.  CityFARM .................................................................................................................................................... 21
100. CityHOME: 200 SQ FT ............................................................................................................................... 21
101. CityOffice .................................................................................................................................................... 21
102. CityScope Barcelona .................................................................................................................................. 21
103. CityScope BostonBRT ................................................................................................................................ 22

104. CityScope Hamburg .................................................................................................................................... 22
105. CityScope Mark I: Real-Time Data Observatory ......................................................................................... 22
106. CityScope Mark II: Scout ............................................................................................................................ 22
107. CityScope Mark III: Dynamic 3D ................................................................................................................. 22
108. CityScope Mark IV: Playground .................................................................................................................. 22
109. CityScope Mark IVa: Riyadh ....................................................................................................................... 22
110. CityScope Mark IVb: Land Use/Transportation .......................................................................................... 22
111. Context-Aware Dynamic Lighting ............................................................................................................... 23
112. Measuring Urban Innovation ....................................................................................................................... 23
113. Mobility on Demand Systems ..................................................................................................................... 23
114. Persuasive Cities ........................................................................................................................................ 23
115. Persuasive Electric Vehicle ........................................................................................................................ 24
116. Persuasive Urban Mobility .......................................................................................................................... 24
117. ViewCube ................................................................................................................................................... 24

Andy Lippman: Viral Communications ................................................................................................................................ 24


118. 8K Time into Space ..................................................................................................................................... 24
119. DbDb ........................................................................................................................................................... 24
120. GIFGIF ........................................................................................................................................................ 25
121. IoT Recorder ............................................................................................................................................... 25
122. Me.TV ......................................................................................................................................................... 25
123. NewsClouds ................................................................................................................................................ 25
124. Plethora ...................................................................................................................................................... 25
125. QUANTIFY .................................................................................................................................................. 25
126. SolarCoin .................................................................................................................................................... 26
127. Sphera ........................................................................................................................................................ 26
128. SuperGlue ................................................................................................................................................... 26
129. The Third Hand ........................................................................................................................................... 26
130. VR Codes ................................................................................................................................................... 26
131. Wall of Now ................................................................................................................................................. 27

Tod Machover: Opera of the Future ..................................................................................................................................... 27


132. Ambisonic Surround-Sound Audio Compression ....................................................................................... 27
133. Breathing Window ....................................................................................................................................... 27
134. City Symphonies: Massive Musical Collaboration ...................................................................................... 27
135. Command Not Found ................................................................................................................................. 27
136. Death and the Powers: Global Interactive Simulcast .................................................................................. 28
137. Death and the Powers: Redefining Opera .................................................................................................. 28
138. Empathy and the Future of Experience ...................................................................................................... 28
139. Fensadense ................................................................................................................................................ 28
140. Hi-Lo Conductor .......................................................................................................................................... 29
141. Hyperinstruments ........................................................................................................................................ 29
142. Hyperproduction: Advanced Production Systems ...................................................................................... 29
143. Hyperscore ................................................................................................................................................. 29
144. Maestro Myth: Exploring the Impact of Conducting Gestures on Musician's Body and Sounding Result .. 30
145. Media Scores .............................................................................................................................................. 30
146. Music Visualization via Musical Information Retrieval ................................................................................ 30
147. Remote Theatrical Immersion: Extending "Sleep No More" ....................................................................... 30
148. Requiem for Rhinoceros ............................................................................................................................. 31
149. Sound Cycles .............................................................................................................................................. 31
150. Using the Voice As a Tool for Self-Reflection ............................................................................................. 31
151. Vocal Vibrations: Expressive Performance for Body-Mind Wellbeing ........................................................ 31

Pattie Maes: Fluid Interfaces ................................................................................................................................................. 31


152. Augmented Airbrush ................................................................................................................................... 32
153. Enlight ......................................................................................................................................................... 32
154. EyeRing: A Compact, Intelligent Vision System on a Ring ......................................................................... 32
155. FingerReader .............................................................................................................................................. 32
156. GlassProv Improv Comedy System ............................................................................................................ 32
157. HandsOn: A Gestural System for Remote Collaboration Using Augmented Reality .................................. 32
158. HRQR ......................................................................................................................................................... 33

159. Hybrid Objects ............................................................................................................................................ 33
160. Invisibilia: Revealing Invisible Data as a Tool for Experiential Learning ..................................................... 33
161. JaJan!: Remote Language Learning in Shared Virtual Space .................................................................... 33
162. LuminAR ..................................................................................................................................................... 33
163. MARS: Manufacturing Augmented Reality System .................................................................................... 33
164. Move Your Glass ........................................................................................................................................ 33
165. Open Hybrid ................................................................................................................................................ 34
166. Reality Editor: Programming Smarter Objects ............................................................................................ 34
167. Remot-IO: A System for Reaching into the Environment of a Remote Collaborator .................................. 34
168. Scanner Grabber ........................................................................................................................................ 34
169. ScreenSpire ................................................................................................................................................ 34
170. ShowMe: Immersive Remote Collaboration System with 3D Hand Gestures ............................................ 35
171. SmileCatcher .............................................................................................................................................. 35
172. STEM Accessibility Tool ............................................................................................................................. 35
173. TagMe ......................................................................................................................................................... 35
174. The Challenge ............................................................................................................................................. 35

Neri Oxman: Mediated Matter ............................................................................................................................................... 36


175. 3D Printing of Functionally Graded Materials ............................................................................................. 36
176. Additive Manufacturing in Glass: Electrosintering and Spark Gap Glass ................................................... 36
177. Anthozoa ..................................................................................................................................................... 36
178. Beast ........................................................................................................................................................... 36
179. Bots of Babel ............................................................................................................................................... 37
180. Building-Scale 3D Printing .......................................................................................................................... 37
181. Carpal Skin ................................................................................................................................................. 37
182. CNSILK: Computer Numerically Controlled Silk Cocoon Construction ...................................................... 37
183. Digitally Reconfigurable Surface ................................................................................................................. 37
184. FABRICOLOGY: Variable-Property 3D Printing as a Case for Sustainable Fabrication ............................ 38
185. FitSocket: Measurement for Attaching Objects to People .......................................................................... 38
186. Functionally Graded Filament-Wound Carbon-Fiber Prosthetic Sockets ................................................... 38
187. Gemini ........................................................................................................................................................ 38
188. Glass Printing .............................................................................................................................................. 39
189. Lichtenberg 3D Printing .............................................................................................................................. 39
190. Living Glass ................................................................................................................................................ 39
191. Living Mushtari ............................................................................................................................................ 39
192. Meta-Mesh: Computational Model for Design and Fabrication of Biomimetic Scaled Body Armors .......... 40
193. Micro-Macro Fluidic Fabrication of a Mid-Sole Running Shoe .................................................................... 40
194. Mobile Digital Construction Platform (MDCP) ............................................................................................. 40
195. Monocoque ................................................................................................................................................. 40
196. PCB Origami ............................................................................................................................................... 40
197. Printing Living Materials .............................................................................................................................. 41
198. Printing Multi-Material 3D Microfluidics ....................................................................................................... 41
199. Rapid Craft .................................................................................................................................................. 41
200. Raycounting ................................................................................................................................................ 41
201. Silk Pavilion ................................................................................................................................................. 41
202. SpiderBot .................................................................................................................................................... 42
203. Water-Based Additive Manufacturing ......................................................................................................... 42

Sputniko!: Design Fiction ...................................................................................................................................................... 42


204. (Im)possible Baby ....................................................................................................................................... 42
205. CremateBot: Transform, Reborn, Free ....................................................................................................... 42
206. Nostalgic Touch .......................................................................................................................................... 43
207. Open Source Estrogen: Housewives Making Drugs ................................................................................... 43
208. Pop Roach .................................................................................................................................................. 43
209. Teshima Bio-Art Pavilion ............................................................................................................................ 43
210. Tranceflora: Amy's Glowing Silk ................................................................................................................. 43
211. Virgin Birth Simulator .................................................................................................................................. 44

Joseph Paradiso: Responsive Environments ..................................................................................................................... 44


212. #QL: Hashtag Query Language .................................................................................................................. 44
213. Chain API .................................................................................................................................................... 44
214. Circuit Stickers ............................................................................................................................................ 44
215. Circuit Stickers Activity Book ...................................................................................................................... 44
216. Circuit Storybook ........................................................................................................................................ 44
217. disCERN: Sonification Platform for High-Energy Physics Data ................................................................. 45
218. DoppelLab: Experiencing Multimodal Sensor Data .................................................................................... 45
219. Experiential Lighting: New User-Interfaces for Lighting Control ................................................................. 45
220. FingerSynth: Wearable Transducers for Exploring the Environment through Sound ................................. 45
221. Hacking the Sketchbook ............................................................................................................................. 46
222. Halo: Wearable Lighting ............................................................................................................................. 46
223. HearThere: Ubiquitous Sonic Overlay ........................................................................................................ 46
224. ListenTree: Audio-Haptic Display in the Natural Environment .................................................................... 46
225. Living Observatory: Sensor Networks for Documenting and Experiencing Ecology .................................. 47
226. Mindful Photons: Context-Aware Lighting ................................................................................................... 47
227. MMODM: Massively Multiplayer Online Drum Machine .............................................................................. 47
228. Mobile, Wearable Sensor Data Visualization .............................................................................................. 47
229. NailO ........................................................................................................................................................... 47
230. Prosthetic Sensor Networks: Factoring Attention, Proprioception, and Sensory Coding ........................... 48
231. SensorChimes: Musical Mapping for Sensor Networks .............................................................................. 48

Alex 'Sandy' Pentland: Human Dynamics ............................................................................................................................ 48


232. bandicoot: A Python Toolbox for Mobile Phone Metadata .......................................................................... 48
233. Data-Pop Alliance ....................................................................................................................................... 48
234. Enigma ........................................................................................................................................................ 48
235. Inducing Peer Pressure to Promote Cooperation ....................................................................................... 49
236. Leveraging Leadership Expertise More Effectively in Organizations .......................................................... 49
237. Mobile Territorial Lab .................................................................................................................................. 49
238. On the Reidentifiability of Credit Card Metadata ......................................................................................... 49
239. openPDS/SaferAnswers: Protecting the Privacy of Metadata .................................................................... 49
240. Prediction Markets: Leveraging Internal Knowledge to Beat Industry Prediction Experts .......................... 50
241. Sensible Organizations ............................................................................................................................... 50
242. The Privacy Bounds of Human Mobility ...................................................................................................... 50

Rosalind W. Picard: Affective Computing ........................................................................................................................... 50


243. Affective Response to Haptic Signals ......................................................................................................... 50
244. An EEG and Motion-Capture Based Expressive Music Interface for Affective Neurofeedback ................. 51
245. Automated Tongue Analysis ....................................................................................................................... 51
246. Automatic Stress Recognition in Real-Life Settings ................................................................................... 51
247. Autonomic Nervous System Activity in Epilepsy ........................................................................................ 51
248. BioGlass: Physiological Parameter Estimation Using a Head-Mounted Wearable Device ........................ 52
249. BioInsights: Extracting Personal Data from Wearable Motion Sensors ...................................................... 52
250. BioWatch: Estimation of Heart and Breathing Rates from Wrist Motions ................................................... 52
251. Building the Just-Right-Challenge in Games and Toys .............................................................................. 52
252. EDA Explorer .............................................................................................................................................. 53
253. Fathom: Probabilistic Graphical Models to Help Mental Health Counselors .............................................. 53
254. FEEL: A Cloud System for Frequent Event and Biophysiological Signal Labeling ..................................... 53
255. Got Sleep? .................................................................................................................................................. 53
256. IDA: Inexpensive Networked Digital Stethoscope ...................................................................................... 53
257. Large-Scale Pulse Analysis ........................................................................................................................ 54
258. Lensing: Cardiolinguistics for Atypical Angina ............................................................................................ 54
259. Mapping the Stress of Medical Visits .......................................................................................................... 54
260. Measuring Arousal During Therapy for Children with Autism and ADHD ................................................... 54
261. Mobile Health Interventions for Drug Addiction and PTSD ......................................................................... 54
262. Mobisensus: Predicting Your Stress/Mood from Mobile Sensor Data ........................................................ 55
263. Modulating Peripheral and Cortical Arousal Using a Musical Motor Response Task ................................. 55
264. Objective Assessment of Depression and Its Improvement ....................................................................... 55
265. Panoply ....................................................................................................................................................... 55
266. PongCam .................................................................................................................................................... 55

267. Predicting Students' Wellbeing from Physiology, Phone, Mobility, and Behavioral Data .......... 56
268. Real-Time Assessment of Suicidal Thoughts and Behaviors .......... 56
269. SmileTracker .......... 56
270. SNAPSHOT Expose .......... 56
271. SNAPSHOT Study .......... 57
272. StoryScape .......... 57
273. The Challenge .......... 57
274. Tributary .......... 57
275. Unlocking Sleep .......... 57
276. Valinor: Mathematical Models to Understand and Predict Self-Harm .......... 58
277. Wavelet-Based Motion Artifact Removal for Electrodermal Activity .......... 58

Iyad Rahwan: Scalable Cooperation .................................................................................................................................... 58


278. Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars? .......... 58
279. Crowdsourcing a Manhunt .......... 58
280. Crowdsourcing Under Attack .......... 58
281. Honest Crowds .......... 59

Ramesh Raskar: Camera Culture ......................................................................................................................................... 59


282. 6D Display .......... 59
283. A Switchable Light-Field Camera .......... 59
284. Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance .......... 59
285. CATRA: Mapping of Cataract Opacities Through an Interactive Approach .......... 60
286. Coded Computational Photography .......... 60
287. Coded Focal Stack Photography .......... 60
288. Compressive Light-Field Camera: Next Generation in 3D Photography .......... 60
289. Eyeglasses-Free Displays .......... 60
290. Imaging Behind Diffusive Layers .......... 61
291. Imaging through Scattering Media Using Femtophotography .......... 61
292. Inverse Problems in Time-of-Flight Imaging .......... 61
293. Layered 3D: Glasses-Free 3D Printing .......... 61
294. LensChat: Sharing Photos with Strangers .......... 61
295. Looking Around Corners .......... 61
296. NETRA: Smartphone Add-On for Eye Tests .......... 62
297. New Methods in Time-of-Flight Imaging .......... 62
298. PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures .......... 62
299. Polarization Fields: Glasses-Free 3DTV .......... 62
300. Portable Retinal Imaging .......... 63
301. Reflectance Acquisition Using Ultrafast Imaging .......... 63
302. Second Skin: Motion Capture with Actuated Feedback for Motor Learning .......... 63
303. Shield Field Imaging .......... 63
304. Single Lens Off-Chip Cellphone Microscopy .......... 63
305. Single-Photon Sensitive Ultrafast Imaging .......... 63
306. Skin Perfusion Photography .......... 64
307. Slow Display .......... 64
308. SpeckleSense .......... 64
309. SpecTrans: Classification of Transparent Materials and Interactions .......... 64
310. StreetScore .......... 64
311. Tensor Displays: High-Quality Glasses-Free 3D TV .......... 65
312. Theory Unifying Ray and Wavefront Lightfield Propagation .......... 65
313. Time-of-Flight Microwave Camera .......... 65
314. Towards In-Vivo Biopsy .......... 65
315. Trillion Frames Per Second Camera .......... 65
316. Ultrasound Tomography .......... 66
317. Unbounded High Dynamic Range Photography Using a Modulo Camera .......... 66
318. Vision on Tap .......... 66
319. VisionBlocks .......... 66
320. Visual Lifelogging .......... 66

MIT Media Lab

Mitchel Resnick: Lifelong Kindergarten .............................................................................................................................. 67


321. App Inventor .......... 67
322. Askii .......... 67
323. Build in Progress .......... 67
324. Computer Clubhouse .......... 67
325. Computer Clubhouse Village .......... 68
326. Duct Tape Network .......... 68
327. Family Creative Learning .......... 68
328. Learning Creative Learning .......... 68
329. Learning with Data .......... 68
330. Lemann Creative Learning Program .......... 69
331. Libranet .......... 69
332. Making Learning Work .......... 69
333. Making with Stories .......... 69
334. Media Lab Digital Certificates .......... 69
335. Media Lab Virtual Visit .......... 69
336. ML Online Learning .......... 69
337. Para .......... 70
338. Read Out Loud .......... 70
339. Scratch .......... 70
340. Scratch Data Blocks .......... 70
341. Scratch Day .......... 71
342. Scratch Extensions .......... 71
343. ScratchJr .......... 71
344. Spin .......... 71
345. Start Making! .......... 71
346. Unhangout .......... 72

Deb Roy: Social Machines .................................................................................................................................................... 72


347. AINA: Aerial Imaging and Network Analysis .......... 72
348. Human Atlas .......... 72
349. Journalism Mapping and Analytics Project (JMAP) .......... 72
350. Responsive Communities: Pilot Project in Jun, Spain .......... 73
351. Rumor Gauge: Automatic Detection and Verification of Rumors in Twitter .......... 73
352. Social Literacy Learning .......... 73
353. The Electome: Measuring Responsiveness in the 2016 Election .......... 73

Chris Schmandt: Living Mobile ............................................................................................................................................ 74


354. Activ8 .......... 74
355. Amphibian: Terrestrial Scuba Diving Using Virtual Reality .......... 74
356. Kaan: Wristband to Educate Deaf Children about Social Norms .......... 74
357. Meta-Physical-Space VR .......... 74
358. MugShots .......... 74
359. NailO .......... 75
360. OnTheGo .......... 75
361. Spotz .......... 75
362. Variable Reality: Interaction with the Virtual Book .......... 75

Kevin Slavin: Playful Systems .............................................................................................................................................. 75


363. Tools for Super-Human Time Perception .......... 75
364. 20 Day Stranger .......... 76
365. 32,768 Times Per Second .......... 76
366. Amino: A Tamagotchi for Synthetic Biology .......... 76
367. AutomaTiles .......... 76
368. beneath the chip .......... 76
369. Case and Molly .......... 76
370. Cordon Sanitaire .......... 77
371. Darkball .......... 77
372. DeepView: Computational Tools for Chess Spectatorship .......... 77
373. Designing Immersive Multi-Sensory Eating Experiences .......... 77
374. Dice++ .......... 77
375. EyeWire .......... 78
376. GAMR .......... 78
377. Homeostasis .......... 78
378. MicroPsi: An Architecture for Motivated Cognition .......... 78
379. radiO_o .......... 78
380. Sneak: A Hybrid Digital-Physical Tabletop Game .......... 78
381. Soft Exchange: Interaction Design with Biological Interfaces .......... 79
382. Storyboards .......... 79
383. Troxes .......... 79

Ethan Zuckerman: Civic Media ............................................................................................................................................. 79


384. "Make the Breast Pump Not Suck!" Hackathon .......... 79
385. Action Path .......... 79
386. Civic Crowdfunding Research Project .......... 80
387. Code4Rights .......... 80
388. Codesign Toolkit .......... 80
389. Data Therapy .......... 80
390. DataBasic .......... 80
391. DeepStream .......... 81
392. Digital Humanitarian Marketplace .......... 81
393. Erase the Border .......... 81
394. First Upload .......... 81
395. FOLD .......... 81
396. Framework for Consent Policies .......... 81
397. Global Brands .......... 82
398. Mapping the Globe .......... 82
399. Media Cloud .......... 82
400. Media Cloud Brazil .......... 82
401. Media Meter .......... 82
402. Media Meter Focus .......... 82
403. Media Perspective .......... 83
404. Mixed-Mode Systems in Disaster Response .......... 83
405. NetStories .......... 83
406. NewsPix .......... 83
407. NGO2.0 .......... 83
408. Open Gender Tracker .......... 84
409. Open Water Project .......... 84
410. Out for Change: Transformative Media Organizing Project .......... 84
411. PageOneX .......... 84
412. Promise Tracker .......... 84
413. Readersourcing .......... 85
414. Scanner Grabber .......... 85
415. Student Legal Services for Innovation .......... 85
416. Terra Incognita: 1000 Cities of the World .......... 85
417. The Babbling Brook .......... 85
418. The Effect of Gratitude Online .......... 85
419. VoIP Drupal .......... 86
420. Vojo.co .......... 86
421. What We Watch .......... 86
422. Whose Voices? Twitter Citation in the Media .......... 86
423. ZL Vortice .......... 86


V. Michael Bove Jr.: Object-Based Media

Changing storytelling, communication, and everyday life through sensing, understanding, and new interface technologies.

1. 3D Telepresence Chair
Daniel Novy

An autostereoscopic (no-glasses) 3D display engine is combined with a "Pepper's Ghost" setup to create an office chair that appears to contain a remote meeting participant. The system geometry is also suitable for other applications, such as tabletop or automotive heads-up displays.

2. 4K Comics
V. Michael Bove and Daniel Novy

4K Comics applies the affordances of ultra-high-resolution screens to traditional print media such as comic books, graphic novels, and other sequential art forms. The comic panel becomes the entry point to the corresponding moment in the film adaptation, while scenes from the film indicate the source frames of the graphic novel. The relationship between comics, films, parodies, and other support materials can be navigated using native touch screens, gestures, or novel wireless control devices. Big-data techniques are used to sift, store, and explore vast catalogs of long-running titles, enabling sharing and remixing among friends, fans, and collectors.
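The panel-to-film cross-referencing could be modeled as a bidirectional index, sketched below. All identifiers, timings, and function names are invented for illustration; the project's actual data model is not described in this catalog.

```python
# Hypothetical sketch: each comic panel maps to a timestamp in the film
# adaptation, and any film moment maps back to its source panel.
import bisect

panel_to_time = {("issue1", 1): 12.0, ("issue1", 2): 47.5, ("issue1", 3): 90.0}

# Sorted timestamps let us look up the most recent panel for a film moment.
time_index = sorted((t, p) for p, t in panel_to_time.items())
times = [t for t, _ in time_index]

def film_moment(panel):
    """Entry point from a comic panel into the film adaptation (seconds)."""
    return panel_to_time[panel]

def source_panel(seconds):
    """Back-reference from a film time to the panel it adapts."""
    i = max(bisect.bisect_right(times, seconds) - 1, 0)
    return time_index[i][1]
```

A touch on a panel would call `film_moment` to seek the video, while scrubbing the film would call `source_panel` to highlight the matching page.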

3. Aerial Volumetric Light-Field Display

V. Michael Bove, Daniel Novy and Henry Holtzman (Samsung NExD Lab)

An aerial volumetric light-field display via holographic light-shaping diffuser, projector
array, and sparse phase conjugate reflection.

4. Ambi-Blinds

V. Michael Bove and Ermal Dreshaj

Ambi-Blinds are solar-powered, sunlight-driven window blinds. A reinvention of a
common household item, Ambi-Blinds use the level of sunlight striking the window to
automatically control the tilt of the blinds, effectively controlling how much sunlight is
cast into a room depending on the time of day. Sleep studies indicate that regularly
waking with the sunlight promotes wellness and quality of sleep by regulating our
circadian rhythm throughout the day. By automatically regulating the user's exposure to
sunlight, Ambi-Blinds promote the well-being of the user in a non-invasive way, and
close at night to allow for privacy.
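The control loop described above — tilting the blinds as a function of measured sunlight and closing fully at night — can be sketched as a simple mapping from a light-sensor reading to a tilt angle. All thresholds and the linear response below are illustrative assumptions, not values from the actual Ambi-Blinds hardware:

```python
def blind_tilt(lux, night_threshold=10.0, full_sun=50000.0):
    """Map a light-sensor reading (lux) to a blind tilt angle in degrees.

    0 degrees = fully closed, 90 degrees = fully open. The thresholds and
    the linear glare-limiting response are illustrative guesses only.
    """
    if lux < night_threshold:           # night: close fully for privacy
        return 0.0
    # Daytime: open less as sunlight gets stronger, to limit glare.
    fraction = min(lux, full_sun) / full_sun
    return 90.0 * (1.0 - 0.5 * fraction)
```

A real controller would also smooth the sensor signal over time so the blinds do not chatter as clouds pass.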

5. BigBarChart

V. Michael Bove and Laura Perovich


BigBarChart is an immersive, 3D bar chart that provides a new physical way for people
to interact with data. It takes data beyond visualizations to map out a new area--data
experiences--that are multisensory, embodied, and aesthetic interactions. BigBarChart
is made up of a number of bars that extend up to 10 feet to create an immersive
experience. Bars change height and color in response to interactions that are direct (a
person entering the room), tangible (pushing down on a bar to get meta information), or
digital (controlling bars and performing statistical analyses through a tablet).
BigBarChart helps both scientists and the general public understand information from a
new perspective. Early prototypes are available.

6. Bottles&Boxes: Packaging with Sensors

Ermal Dreshaj and Daniel Novy


We have added inexpensive, low-power, wireless sensors to product packages to
detect user interactions with products. Thus, a bottle can register when and how often
its contents are dispensed (and generate side effects, like causing a music player to
play music when the bottle is picked up, or generating an automatic refill order when
near-emptiness is detected). A box can understand usage patterns of its contents.
Consumers can vote for their favorites among several alternatives simply by handling
them more often.
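The interaction logic described above — registering dispense events and triggering side effects such as an automatic refill order near emptiness — can be sketched as a small event model. The class, method names, and thresholds below are hypothetical, not the project's actual API:

```python
class SmartBottle:
    """Toy model of a sensor-equipped bottle; names and values are
    illustrative, not the actual Bottles&Boxes implementation."""

    def __init__(self, capacity_ml, refill_threshold_ml=50.0):
        self.level_ml = capacity_ml
        self.refill_threshold_ml = refill_threshold_ml
        self.events = []                # log of interactions and side effects

    def dispense(self, amount_ml):
        """Register a dispense event; order a refill when nearly empty."""
        self.level_ml = max(0.0, self.level_ml - amount_ml)
        self.events.append(("dispense", amount_ml))
        if self.level_ml <= self.refill_threshold_ml:
            self.events.append(("refill_order", self.level_ml))

bottle = SmartBottle(capacity_ml=500.0)
bottle.dispense(460.0)   # drops below the threshold, so a refill order is logged
```

The same event log could drive the other side effects mentioned, such as starting a music player when the bottle is picked up.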


7. Calliope

V. Michael Bove Jr., Edwina Portocarrero and Ye Wang


Calliope is the follow-up to the NeverEnding Drawing Machine. A portable, paper-based
platform for interactive story making, it allows physical editing of shared digital media at
a distance. The system is composed of a network of creation stations that seamlessly
blend analog and digital media. Calliope documents and displays the creative process
with no need to interact directly with a computer. By using human-readable tags and
allowing any object to be used as material for creation, it offers opportunities for
cross-cultural and cross-generational collaboration among peers with expertise in
different media.

8. Consumer Holo-Video

V. Michael Bove Jr., Bianca Datta, Ermal Dreshaj and Sundeep Jolly
The goal of this project, building upon work begun by Stephen Benton and the Spatial
Imaging group, is to create an inexpensive desktop monitor for a PC or game console
that displays holographic video images in real time, suitable for entertainment,
engineering, or medical imaging. To date, we have demonstrated the fast rendering of
holo-video images (including stereographic images, which, unlike ordinary stereograms,
have focusing consistent with depth information) from OpenGL databases on
off-the-shelf PC graphics cards; current research addresses new optoelectronic
architectures to reduce the size and manufacturing cost of the display system.
Alumni Contributors: James D. Barabas, Daniel Smalley and Quinn Y J Smithwick

9. Dressed in Data

V. Michael Bove and Laura Perovich


This project steps beyond data visualizations to create data experiences. It aims to
engage not only the analytic mind, but also the artistic and emotional self. In this
project, chemicals found in people's bodies and homes are turned into a series of
fashions. Quantities, properties, and sources of chemicals are represented through
various parameters of the fashion, such as fabric color, textures, and sizes. Wearing
these outfits allows people to live the data, tangibly experiencing the findings from their
homes and bodies. This is the first project in a series of works that seek to create
aesthetic data experiences that prompt researchers and laypeople to engage with
information in new ways.

10. DUSK

V. Michael Bove, Bianca Datta and Ermal Dreshaj


DUSK was created as part of the Media Lab's Advancing Wellbeing Initiative
(supported by the Robert Wood Johnson Foundation) to create private, restful spaces
for people in the workplace. DUSK promotes a vision of a new type of "nap pod," where
workers are encouraged to use the structure on a daily basis for regular breaks and
meditation. The user is provided with the much-needed privacy to take a phone call,
focus, or rest inside the pod for short periods during the day. The inside can be silent,
or filled by binaural beats audio; pitch black, or illuminated by a sunlamp; whatever
works for users to get the rest and relaxation needed to continue to be healthy and
productive. DUSK is created with a parametric press-fit design, making it scalable,
easy to fabricate, and customizable on a per-user basis.

11. EmotiveModeler: An Emotive Form Design CAD Tool

V. Michael Bove and Philippa Mothersill


Whether or not we're experts in the design language of objects, we have an
unconscious understanding of the emotional character of their forms. EmotiveModeler
integrates knowledge about our emotive perception of shapes into a CAD tool that uses
descriptive adjectives as an input to aid both expert and novice designers in creating
objects that can communicate emotive character.
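The idea of taking descriptive adjectives as design input can be sketched as a lookup from emotive words to form parameters that a CAD system could then apply. The adjective lexicon, parameter names, and values below are invented for illustration; EmotiveModeler's actual mapping is not reproduced here:

```python
# Illustrative adjective-to-form mapping; parameters in [0, 1] might drive
# properties like how sharp, tapered, or symmetric a generated form is.
EMOTIVE_PARAMS = {
    "aggressive": {"edge_sharpness": 0.9, "taper": 0.8, "symmetry": 0.4},
    "calm":       {"edge_sharpness": 0.1, "taper": 0.2, "symmetry": 0.9},
    "playful":    {"edge_sharpness": 0.3, "taper": 0.5, "symmetry": 0.5},
}

def blend_form(adjectives):
    """Average the form parameters of the requested adjectives."""
    chosen = [EMOTIVE_PARAMS[a] for a in adjectives if a in EMOTIVE_PARAMS]
    keys = chosen[0].keys()
    return {k: sum(p[k] for p in chosen) / len(chosen) for k in keys}
```

Blending lets a designer describe an object as, say, "aggressive but calm" and get an intermediate character.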


12. Everything Tells a Story

V. Michael Bove Jr., David Cranor and Edwina Portocarrero

Following upon work begun in the Graspables project, we are exploring what happens
when a wide range of everyday consumer products can sense, interpret into human
terms (using pattern recognition methods), and retain memories, such that users can
construct a narrative with the aid of the recollections of the "diaries" of their sporting
equipment, luggage, furniture, toys, and other items with which they interact.

13. Guided-Wave Light Modulator

V. Michael Bove Jr., Bianca Datta and Sunny Jolly

We are developing inexpensive, efficient, high-bandwidth light modulators based on
lithium niobate guided-wave technology. These modulators are suitable for demanding,
specialized applications such as holographic video displays, as well as other light
modulation uses such as compact video projectors.

Alumni Contributors: Daniel Smalley and Quinn Smithwick

14. Holoshop

Paula Dawson, Masa Takatsuka, Hiroshi Yoshikawa, Brian Rogers, and V. Michael Bove Jr.
This project aims to make it easy to create 3D drawings that have the highly nuanced
qualities of handmade drawings. Typically, 2D drawing relies on the conjunction of the
friction and pressure of the medium (pencil and paper) to enable a sensitive registration
of the gesture. However, when drawing in 3D there is not necessarily a "support."
Holoshop software uses forces and magnetism of open and closed fields to enable the
user to locate fixed and semipermeable "supports" within the 3D environment.
Holoshop is being developed for use in conjunction with a haptic device, the Phantom,
enabling the user to navigate 3D space through both touch and vision. Also, the
real-time modulation of lines from velocity and pressure enables responsive drawings
that can be exported for holograms, 3D prints, and other 3D displays. This research is
supported under Australian Research Council's Discovery Projects funding scheme
(DP1094613).

15. Infinity-by-Nine

V. Michael Bove Jr. and Daniel Novy


We are expanding the home-video viewing experience by generating imagery to extend
the TV screen and give the impression that the scene wraps completely around the
viewer. Optical flow, color analysis, and heuristics extrapolate beyond the screen edge,
where projectors provide the viewer's peripheral vision with low-detail dynamic patterns
that are perceptually consistent with the video imagery and increase the sense of
immersive presence and participation. We perform this processing in real time using
standard microprocessors and GPUs.
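The extrapolation step described above can be sketched in miniature: take the colors near a frame edge and extend a low-detail version of them beyond the screen. This stands in crudely for the project's optical-flow and heuristic pipeline; the averaging window and the frame representation (rows of grayscale values) are assumptions for illustration:

```python
def extend_edge(frame, width):
    """Extend a frame horizontally by `width` pixels on each side, filling
    the extension with the average of the outermost three columns — a
    low-detail stand-in for optical-flow-based extrapolation.

    `frame` is a list of rows, each a list of grayscale pixel values.
    """
    out = []
    for row in frame:
        n = min(3, len(row))
        left_avg = sum(row[:3]) / n      # average color near the left edge
        right_avg = sum(row[-3:]) / n    # average color near the right edge
        out.append([left_avg] * width + row + [right_avg] * width)
    return out
```

The real system additionally tracks motion across frames so the extended imagery moves consistently with the on-screen video.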

16. ListenTree: Audio-Haptic Display in the Natural Environment

V. Michael Bove, Joseph A. Paradiso, Gershon Dublon and Edwina Portocarrero

ListenTree is an audio-haptic display embedded in the natural environment. Visitors to
our installation notice a faint sound emerging from a tree. By resting their heads against
the tree, they are able to hear sound through bone conduction. To create this effect, an
audio exciter transducer is weatherproofed and attached to the tree's roots,
transforming it into a living speaker, channeling audio through its branches, and
providing vibrotactile feedback. In one deployment, we used ListenTree to display live
sound from an outdoor ecological monitoring sensor network, bringing a faraway
wetland into the urban landscape. Our intervention is motivated by a need for forms of
display that fade into the background, inviting attention rather than requiring it. We
consume most digital information through devices that alienate us from our
surroundings; ListenTree points to a future where digital information might become
enmeshed in material.

17. Live Objects

V. Michael Bove, Arata Miyamoto and Valerio Panzica La Manna

A Live Object is a small device that can stream media content wirelessly to nearby
mobile devices without an Internet connection. Live Objects are associated with real
objects in the environment, such as an art piece in a museum, a statue in a public
space, or a product in a store. Users exploring a space can discover nearby Live
Objects and view content associated with them, as well as leave comments for future
visitors. The mobile device retains a record of the media viewed (and links to additional
content), while the objects can retain a record of who viewed them. Future extensions
will look into making the system more social, exploring game applications such as
media scavenger hunts built on top of the platform, and incorporating other types of
media such as live and historical data from sensors associated with the objects.

18. Narratarium

V. Michael Bove Jr., Fransheska Colon, Catherine Havasi, Katherine (Kasia)
Hayden, Daniel Novy, Jie Qi and Robert H. Speer
Narratarium augments printed and oral stories and creative play by projecting
immersive images and sounds. We are using natural language processing to listen to
and understand stories being told, and analysis tools to recognize activity among
sensor-equipped objects such as toys, then thematically augmenting the environment
using video and sound. New work addresses the creation and representation of
audiovisual content for immersive story experiences and the association of such
content with viewer context.

19. Networked Playscapes: Dig Deep

V. Michael Bove and Edwina Portocarrero

Networked Playscapes re-imagine outdoor play by merging the flexibility and fantastical
qualities of the digital world with the tangible, sensorial properties of physical play to
create hybrid interactions for the urban environment. Dig Deep takes the classic sandbox
found in children's playgrounds and merges it with the common fantasy of "digging your
way to the other side of the world" to create a networked interaction in tune with child
cosmogony.

20. Pillow-Talk

V. Michael Bove Jr., Edwina Portocarrero and David Cranor

Pillow-Talk is the first of a series of objects designed to aid creative endeavors through
the unobtrusive acquisition of unconscious, self-generated content to permit reflexive
self-knowledge. Composed of a seamless recording device embedded in a pillow and
a playback and visualization system in a jar, Pillow-Talk crystallizes that which we
normally forget. This allows users to capture their dreams in a less mediated way,
aiding recollection by priming the experience and, through embodied interaction,
keeping recall and capture free of distraction.

21. Programmable Synthetic Hallucinations

V. Michael Bove and Daniel Novy

Creating consumer-grade appliances and authoring methodology that allow
hallucinatory phenomena to be programmed and utilized for information display and
narrative storytelling.

22. ShAir: A Platform for Mobile Content Sharing

Yosuke Bando, Daniel Dubois, Konosuke Watanabe, Arata Miyamoto, Henry
Holtzman, and V. Michael Bove

ShAir is a platform for instantly and easily creating local content-shareable spaces
without requiring an Internet connection or location information. ShAir-enabled devices
can opportunistically communicate with other mobile devices and optional pervasive
storage devices such as WiFi SD cards whenever they enter radio range of one
another. Digital content can hop through devices in the background without user
intervention. Applications that can be built on top of the platform include ad-hoc
photo/video/music sharing and distribution, opportunistic social networking and games,
digital business card exchange during meetings and conferences, and local news
article-sharing on trains and buses.

23. Slam Force Net

V. Michael Bove Jr., Santiago Alfaro and Daniel Novy

A basketball net incorporates segments of conductive fiber whose resistance changes
with degree of stretch. By measuring this resistance over time, hardware associated
with this net can calculate force and speed of a basketball traveling through the net.
Applications include training, toys that indicate the force and speed on a display, dunk
competitions, and augmented-reality effects on television broadcasts. This net is far
less expensive and more robust than other approaches to measuring data about the
ball (e.g., photosensors or ultrasonic sensors), and the only physical change required
for the hoop or backboard is electrical connections to the net. Another application of the
material is a flat net that can measure velocity of a ball hit or pitched into it (as in
baseball or tennis); it can measure position as well (e.g., for determining whether a
practice baseball pitch would have been a strike).
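The resistance-over-time measurement described above can be sketched as a signal-processing step: convert readings above the resting baseline into stretch, take the peak stretch as a force estimate, and use the duration of the loading event to estimate speed. The linear resistance-to-stretch model and all constants below are illustrative assumptions; a real system would calibrate them empirically:

```python
def analyze_shot(resistance_samples, dt, baseline, k_stretch=0.01, k_force=200.0):
    """Estimate peak force (N) and a crude transit speed (m/s) from net
    resistance readings taken every `dt` seconds.

    The linear ohms-to-meters conversion (k_stretch) and spring-like
    force constant (k_force) are invented for illustration.
    """
    # Convert resistance above baseline into an estimated stretch in meters.
    stretches = [max(0.0, r - baseline) * k_stretch for r in resistance_samples]
    peak = max(stretches)
    force = k_force * peak                                # Hooke's-law-style estimate
    active = sum(1 for s in stretches if s > 0) * dt      # time the ball loaded the net
    speed = peak / active if active > 0 else 0.0
    return force, speed
```

Shorter loading times for the same peak stretch correspond to a faster, harder shot.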

24. Smell Narratives

Carol Rozendo and V. Michael Bove


We are adding an olfactory dimension to storytelling in order to create more immersive
and evocative experiences. Smell Narratives allows the authoring of a "smell track"
involving individual or proportionally mixed fragrance components.
Alumni Contributor: Santiago Eloy Alfaro

25. SurroundVision

V. Michael Bove Jr. and Santiago Alfaro


Adding augmented reality to the living-room TV, we are exploring the technical and
creative implications of using a mobile phone or tablet (and possibly also dedicated
devices like toys) as a controllable "second screen" for enhancing television viewing.
Thus, a viewer could use the phone to look beyond the edges of the television to see
the audience for a studio-based program, to pan around a sporting event, to take
snapshots for a scavenger hunt, or to simulate binoculars to zoom in on a part of the
scene. Recent developments include the creation of a mobile device app for Apple
products and user studies involving several genres of broadcast television
programming.

26. Thermal Fishing Bob

NEW LISTING

V. Michael Bove, Laura Perovich, Don Blair and Sara Wiley (Northeastern
University)
Two of the most important traits of environmental hazards today are their invisibility and
the fact that they are experienced by communities, not just individuals. Yet we don't
have a good way to make hazards like chemical pollution visible and intuitive. The
thermal fishing bob seeks to visceralize rather than simply visualize data by creating a
data experience that makes water pollution data present. The bob measures water
temperature and displays that data by changing color in real time. Data is also logged
to be physically displayed elsewhere and can be further recorded using long-exposure
photos. Making environmental data experiential and interactive will help both
communities and researchers better understand pollution and its implications.
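The bob's real-time display — temperature mapped to color — can be sketched as a simple interpolation from cold (blue) to warm (red). The temperature endpoints below are illustrative, not the bob's actual calibration:

```python
def temp_to_rgb(temp_c, cold_c=5.0, hot_c=30.0):
    """Map a water temperature (Celsius) to an (R, G, B) LED color,
    interpolating from blue at cold_c to red at hot_c. Endpoints are
    illustrative assumptions only."""
    t = (temp_c - cold_c) / (hot_c - cold_c)
    t = min(1.0, max(0.0, t))                        # clamp to [0, 1]
    return (int(255 * t), 0, int(255 * (1 - t)))     # warm -> red, cold -> blue
```

Logged alongside timestamps and positions, the same mapping lets a long-exposure photograph trace a thermal plume through the water.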

Ed Boyden: Synthetic Neurobiology


Revealing insights into the human condition and repairing brain disorders via novel tools
for mapping and fixing brain computations.

27. Optogenetics and Synthetic Biology Tools

Amy Chuong, Daniel Martin-Alarcon, Kate Adamala, Or Shemesh, Guangyu Xu,
Katriona Guthrie-Honea, Aimei Yang, Deniz Aksel, Ellena Popova
We have pioneered the development of fully genetically encoded reagents that, when
targeted to specific cells, enable their physiology to be controlled via light, as well as
other specific manipulations of cellular biological processes. Optogenetic tools enable
temporally precise control of neural electrical activity, cellular signaling, and other
high-speed physiological processes using light. Other tools we are developing enable
the control and monitoring of protein translation and other key cell biological processes.
Such tools are being explored throughout neuroscience and bioengineering, for the
study and repair of brain circuits. Derived from the natural world, these tools highlight
the power of ecological diversity, in yielding technologies for analyzing biological
complexity and addressing human health. We distribute these tools as freely as
possible.


28. Prototype Strategies for Treating Brain Disorders

Karen Buch, Stephanie Ku, Changyang Linghu, Giovanni Talei Franzesi, Christian
Wentz, Nir Grossman, Harbaljit Sohal, Bara Badwan

New technologies for recording neural activity, controlling neural activity, or building
brain circuits may be capable some day of serving in therapeutic roles for improving
the health of human patients: enabling the restoration of lost senses, the control of
aberrant or pathological neural dynamics, and the augmentation of neural circuit
computation, through prosthetic means. High-throughput molecular and physiological
analysis methods may also open up new diagnostic possibilities. We are assessing,
often in collaboration with other groups, the translational possibilities opened up by our
technologies, exploring their safety and efficacy in multiple animal models, in order to
discover potential applications of our tools to various clinically relevant scenarios. New
kinds of "brain co-processor" may be possible that can work efficaciously with the brain
to augment its computational abilities, e.g., in the context of cognitive, emotional,
sensory, or motor disability.

29. Tools for Mapping the Molecular Structure of the Brain

Daniel Oran, Pablo Valdes, Alex Wissner-Gross, Fei Chen, Linyi Gao, Sam
Rodriques, Paul Tillberg, Asmamaw "Oz" Wassie, Shahar Alon, Jae-Byum Chang,
Ru Wang, Yongxin Zhao, Manos Karagiannis, Adam Marblestone, Alexander
Clifton, Jeremy Wohlwend, Andrew Payn

Brain circuits are large, 3D structures. However, their building blocks (proteins,
signaling complexes, synapses) are organized with nanoscale precision. This presents
a fundamental tension in neuroscience: to understand a neural circuit, you might need
to map a large diversity of nanoscale building blocks across an extended spatial
expanse. We are developing a new suite of tools that enable the mapping of the
location and identity of the molecular building blocks of the brain, so that
comprehensive taxonomies of cells, circuits, and computations might someday become
possible, even in entire brains. One of the technologies we are developing enables
large, 3D objects to be imaged with nanoscale precision by physically expanding the
sample, a tool we call expansion microscopy (ExM). We are working to improve
expansion microscopy further, and are working, often in interdisciplinary collaborations,
on a suite of new labeling and analysis techniques to enable multiplexed readout.

30. Tools for Recording High-Speed Brain Dynamics

Moshe Ben-Ezra, Jun Deguchi, Caroline Moore-Kochlacs, Jake Bernstein, Ishan
Gupta, Mike Henninger, Nikita Pak, Young Gyu Yoon, Justin Kinney, Suhasa
Kodandaramaiah, Kiryl Piatkevich, Jorg Scholvin, Nickolaos Savidis, Semon
Rezchikov
The brain is a three-dimensional, densely wired circuit that computes via large sets of
widely distributed neurons interacting at fast timescales. Ideally it would be possible to
observe the activity of many neurons with as great a degree of precision as possible, so
as to understand the neural codes and dynamics that are produced by the circuits of
the brain. Our lab and our collaborators are developing a number of innovations to
enable such analyses. These tools will hopefully enable pictures of how neurons work
together to implement brain computations, and how these computations go awry in
brain disorders. Such neural observation strategies may also serve as detailed
biomarkers of brain disorders or indicators of potential drug side effects. These
technologies may, in conjunction with optogenetics, enable closed-loop neural control
technologies, which can introduce information into the brain as a function of brain state
("brain co-processors").

31. Understanding Normal and Pathological Brain Computations

Brian Allen, Ho-Jun Suk, Jay Yu, Limor Freifeld, Erica (Eunjung) Jung, Annabelle
Singer, Demian Park, Ingrid Van Welie, Bettina Arkhurst, Eunice Wu
We are providing our tools to the community, and also using them within our lab, to
analyze how specific brain mechanisms (molecular, cellular, circuit-level) give rise to
behaviors and pathological states. These studies may yield fundamental insights into
how best to go about treating brain disorders.


Cynthia Breazeal: Personal Robots


Building socially engaging robots and interactive technologies to help people live healthier
lives, connect with others, and learn better.

32. AIDA: Affective Intelligent Driving Agent

Cynthia Breazeal and Kenton Williams

Drivers spend a significant amount of time multi-tasking while they are behind the
wheel. These dangerous behaviors, particularly texting while driving, can lead to
distractions and ultimately to accidents. Many in-car interfaces designed to address this
issue still neither take a proactive role to assist the driver nor leverage aspects of the
driver's daily life to make the driving experience more seamless. In collaboration with
Volkswagen/Audi and the SENSEable City Lab, we are developing AIDA (Affective
Intelligent Driving Agent), a robotic driver-vehicle interface that acts as a sociable
partner. AIDA elicits facial expressions and strong non-verbal cues for engaging social
interaction with the driver. AIDA also leverages the driver's mobile device as its face,
which promotes safety, offers proactive driver support, and fosters deeper
personalization to the driver.

33. Animal-Robot Interaction

Brad Knox, Patrick Mccabe and Cynthia Breazeal

Like people, dogs and cats live among technologies that affect their lives. Yet little of
this technology has been designed with pets in mind. We are developing systems that
interact intelligently with animals to entertain, exercise, and empower them. Currently,
we are developing a laser-chasing game, in which dogs or cats are tracked by a
ceiling-mounted webcam, and a computer-controlled laser moves with knowledge of the
pet's position and movement. Machine learning will be applied to optimize the specific
laser strategy. We envision enabling owners to initiate and view the interaction remotely
through a web interface, providing stimulation and exercise to pets when the owners
are at work or otherwise cannot be present.

34. Cloud-HRI

Cynthia Breazeal, Nicholas DePalma, Adam Setapen and Sonia Chernova

Imagine opening your eyes and being awake for only half an hour at a time. This is the
life that robots traditionally live, due to factors such as battery life and wear on
prototype joints. Roboticists have typically muddled through this challenge by crafting
handmade perception and planning models of the world, or by using machine learning
with synthetic and real-world data, but cloud-based robotics aims to marry large
distributed systems with machine learning techniques to understand how to build robots
that interpret the world in a richer way. This movement aims to build large-scale
machine learning algorithms that use experiences from large groups of people, whether
sourced from a large number of tabletop robots or a large number of experiences with
virtual agents. Large-scale robotics aims to change embodied AI as it changed
non-embodied AI.

35. DragonBot: Android Phone Robots for Long-Term HRI

Adam Setapen, Natalie Freed, and Cynthia Breazeal


DragonBot is a new platform built to support long-term interactions between children
and robots. The robot runs entirely on an Android cell phone, which displays an
animated virtual face. Additionally, the phone provides sensory input (camera and
microphone) and fully controls the actuation of the robot (motors and speakers). Most
importantly, the phone always has an Internet connection, so a robot can harness
cloud-computing paradigms to learn from the collective interactions of multiple robots.
To support long-term interactions, DragonBot is a "blended-reality" character: if you
remove the phone from the robot, a virtual avatar appears on the screen and the user
can still interact with the virtual character on the go. Costing less than $1,000,
DragonBot was specifically designed to be a low-cost platform that can support
longitudinal human-robot interactions "in the wild."


36. Global Literacy Tablets

Cynthia Breazeal, David Nunez, Tinsley Galyean, Maryanne Wolf (Tufts), and
Robin Morris (GSU)
We are developing a system of early literacy apps, games, toys, and robots that will
triage how children are learning, diagnose literacy deficits, and deploy dosages of
content to encourage app play using a mentoring algorithm that recommends an
appropriate activity given a child's progress. Currently, over 200 Android-based tablets
have been sent to children around the world; these devices are instrumented to provide
a very detailed picture of how kids are using these technologies. We are using this big
data to discover usage and learning models that will inform future educational
development.

37. Huggable: A Social Robot for Pediatric Care

Boston Children's Hospital, Northeastern University, Cynthia Breazeal, Sooyeon
Jeong, Fardad Faridi and Jetta Company
Children and their parents may undergo challenging experiences when admitted for
inpatient care at pediatric hospitals. While most hospitals make efforts to provide
socio-emotional support for patients and their families during care, gaps still exist
between human resource supply and demand. The Huggable project aims to close this
gap by creating a social robot able to mitigate stress, anxiety, and pain in pediatric
patients by engaging them in playful interactions. In collaboration with Boston
Children's Hospital and Northeastern University, we are currently running an
experimental study to compare the effects of the Huggable robot to a virtual character
on a screen and a plush teddy bear. We demonstrated preliminarily that children are
more eager to emotionally connect with and be physically activated by a robot than a
virtual character, illustrating the potential of social robots to provide socio-emotional
support during inpatient pediatric care.
Alumni Contributors: Sigurdur Orn Adalgeirsson, Kristopher Bernardo Dos Santos and
Walter Dan Stiehl

38. LightSwarm

Charles Rose, Cynthia Breazeal, Palash Nandy, Hiram Moncivias and Kyubok Lee
LightSwarm is a platform for interaction between humans and swarm robots. Swarm
robots are implemented as augmented-reality agents that communicate and interact
through movements. While each robot is simple, the aggregate shows complex
behavior. In this way, LightSwarm invites one to think of the mind as a loose consensus
of a society of agents, which allows for more nuanced interactions.

39. Mind-Theoretic Planning for Robots

Cynthia Breazeal and Sigurdur Orn Adalgeirsson

Mind-Theoretic Planning (MTP) is a technique for robots to plan in social domains. This
system takes into account probability distributions over the initial beliefs and goals of
people in the environment that are relevant to the task, and creates a prediction of how
they will rationally act on their beliefs to achieve their goals. The MTP system then
proceeds to create an action plan for the robot that simultaneously takes advantage of
the effects of anticipated actions of others and also avoids interfering with them.

40. Robot Learning from Human-Generated Rewards

Brad Knox, Robert Radway, Tom Walsh, and Cynthia Breazeal

To serve us well, robots and other agents must understand our needs and how to fulfill
them. To that end, our research develops robots that empower humans by interactively
learning from them. Interactive learning methods enable technically unskilled end-users
to designate correct behavior and communicate their task knowledge to improve a
robot's task performance. This research on interactive learning focuses on algorithms
that facilitate teaching by signals of approval and disapproval from a live human trainer.
We operationalize these feedback signals as numeric rewards within the
machine-learning framework of reinforcement learning. In comparison to the
complementary form of teaching by demonstration, this feedback-based teaching may
require less task expertise and place less cognitive load on the trainer. Envisioned
applications include human-robot collaboration and assistive robotic devices for
handicapped users, such as myoelectrically controlled prosthetics.
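Treating approval and disapproval as numeric rewards can be sketched with a tiny tabular learner: each human signal nudges the value estimate for the state-action pair just taken toward the signal, in the spirit of reward-shaping approaches like TAMER. This is a toy sketch under assumed names and a fixed learning rate, not the group's actual algorithm:

```python
def train_from_feedback(feedback, states, actions, alpha=0.5):
    """Learn action values from a human trainer's approval (+1) /
    disapproval (-1) signals. Each signal is treated as a direct numeric
    reward for the (state, action) just performed.

    feedback: iterable of (state, action, reward) tuples.
    Returns a dict mapping (state, action) to a learned value estimate.
    """
    q = {(s, a): 0.0 for s in states for a in actions}
    for s, a, r in feedback:
        q[(s, a)] += alpha * (r - q[(s, a)])   # move estimate toward the signal
    return q
```

After training, the robot would prefer the action with the highest learned value in each state; repeated approval of the same action drives its estimate toward +1.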


41. Robotic Language Learning Companions

Cynthia Breazeal, Jacqueline Kory Westlund, Sooyeon Jeong, Paul Harris, Dave
DeSteno, and Leah Dickens
Young children learn language not through listening alone, but through active
communication with a social actor. Cultural immersion and context are also key in
long-term language development. We are developing robotic conversational partners
and hybrid physical/digital environments for language learning. For example, the robot
Sophie helped young children learn French through a food-sharing game. The game
was situated on a digital tablet embedded in a café table. Sophie modeled how to order
food and as the child practiced the new vocabulary, the food was delivered via digital
assets onto the table's surface. A teacher or parent can observe and shape the
interaction remotely via a digital tablet interface to adjust the robot's conversation and
behavior to support the learner. More recently, we have been examining how social
nonverbal behaviors impact children's perceptions of the robot as an informant and
social companion.
Alumni Contributors: Natalie Anne Freed and Adam Michael Setapen

42. Robotic Learning Companions

Cynthia Breazeal, Jacqueline Kory Westlund, and Samuel Spaulding


The language and literacy skills of young children entering school are highly predictive
of their long-term academic success. Children from low-income families are particularly
at risk. Parents often work multiple jobs, giving them less time to talk to and read with
their children. Parents might be illiterate or not speak the language taught in local
schools, and they may not have been read to as children, providing less experience of
good co-reading practice to draw upon. We are currently developing a robotic reading
companion for young children, trained by interactive demonstrations from parents
and/or educational experts. We intend for this robot to complement parental interaction
and emulate some of their best practices in co-reading, building language and literacy
through asking comprehension questions, prompting exploration, and simply being
emotionally involved in the child's reading experience.
Alumni Contributors: Brad Knox and David Nunez

43.

SHARE: Understanding and Manipulating Attention Using Social Robots

Cynthia Breazeal and Nick DePalma

SHARE is a robotic cognitive architecture focused on understanding and manipulating
the phenomenon of shared attention during interaction. SHARE incorporates new
findings on nonverbal referential gesture, visual attention systems, and interaction
science, and brings together new measurement devices, advanced artificial neural
circuits, and a robot that makes its own decisions.

44.

Socially Assistive Robotics: An NSF Expedition in Computing

Cynthia Breazeal, Edith Ackermann, Goren Gordon, Michal Gordon, Sooyeon Jeong,
Jacqueline Kory, Jin Joo Lee, Luke Plummer, and Samuel Spaulding, with Kasia Hayden
(Stanford University), Tufts University, University of Southern California, Willow Garage,
and Yale University

Our mission is to develop the computational techniques that will enable the design,
implementation, and evaluation of "relational" robots that encourage social, emotional,
and cognitive growth in children, including those with social or cognitive deficits.
Funding for the project comes from the NSF Expeditions in Computing program. This
expedition has the potential to substantially impact the effectiveness of education and
healthcare, and to enhance the lives of children and other groups that require
specialized support and intervention. In particular, the MIT effort focuses on developing
second-language learning companions for pre-school-aged children, ultimately for ESL
(English as a Second Language).
Alumni Contributors: Catherine Havasi and Brad Knox

MIT Media Lab

October 2015

Page 9

45.

Tega: A New Robot Platform for Long-Term Interaction

Cooper Perkins Inc., Fardad Faridi, Cynthia Breazeal, Jin Joo Lee, Luke Plummer,
IFRobots and Stacey Dyer
Tega is a new robot platform for long-term interactions with children. The robot
leverages smart phones to graphically display facial expressions. Smart phones are
also used for computational needs, including behavioral control, sensor processing, and
motor control to drive its five degrees of freedom. To withstand long-term continual use,
we have designed an efficient battery-powered system that can potentially run for up to
six hours before needing to be charged. We also designed for more robust and reliable
actuator movements so that the robot can express consistent and expressive behaviors
over long periods of time. With its small size and furry exterior, the robot is
aesthetically designed to appeal to children. We aim to field-test the robot's ability to work
reliably in out-of-lab environments and engage young children in educational activities.
Alumni Contributor: Kris Dos Santos

46.

TinkRBook: Reinventing the Reading Primer

Cynthia Breazeal, Angela Chang, and David Nunez


TinkRBook is a storytelling system that introduces a new concept of reading, called
textual tinkerability. Textual tinkerability uses storytelling gestures to expose the
text-concept relationships within a scene. Tinkerability prompts readers to become
more physically active and expressive as they explore concepts in reading together.
TinkRBooks are interactive storybooks that prompt interactivity in a subtle way,
enhancing communication between parents and children during shared picture-book
reading. TinkRBooks encourage positive reading behaviors in emergent literacy:
parents act out the story to control the words onscreen, demonstrating print referencing
and dialogic questioning techniques. Young children actively explore the abstract
relationship between printed words and their meanings, even before this relationship is
properly understood. By making story elements alterable within a narrative, readers can
learn to read by playing with how word choices impact the storytelling experience.
Recently, this research has been applied in developing countries.
Alumni Contributor: Angela Chang

Hugh Herr: Biomechatronics


Enhancing human physical capability.

47.


Artificial Gastrocnemius

Hugh Herr and Ken Endo


Neuromechanical models of human walking show how each muscle works during
normal, level-ground walking. The muscles are mainly modeled with clutches and linear
springs, yet these models capture dominant normal walking behavior, which suggests
using a series-elastic clutch at the knee joint for below-knee amputees. We have
developed a powered ankle prosthesis that generates enough force to enable a user to
walk "normally." However, amputees still have problems at the knee joint due to the lack
of the gastrocnemius, which acts as both a knee flexor and an ankle plantar flexor. We
hypothesize that the metabolic cost and EMG patterns of an amputee walking with our
powered ankle and a virtual gastrocnemius will improve dramatically.
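The clutch-and-spring abstraction reduces to a simple torque law: while engaged, a series-elastic clutch behaves as a linear spring referenced to the angle at which it locked, storing and returning energy at essentially no electrical cost. A minimal sketch (parameter values are illustrative, not taken from the project):

```python
def clutch_spring_torque(theta, theta_lock, k, engaged):
    """Torque of a series-elastic clutch: a linear spring of stiffness k
    acts only while the clutch is engaged, referenced to the angle
    theta_lock at which it locked; disengaged, it transmits no torque."""
    if not engaged:
        return 0.0
    return -k * (theta - theta_lock)

# Clutch locked at 0.10 rad with stiffness 300 N*m/rad (illustrative values)
print(clutch_spring_torque(0.25, 0.10, 300.0, True))   # engaged: spring resists stretch
print(clutch_spring_torque(0.25, 0.10, 300.0, False))  # disengaged: zero torque
```

The disengaged branch is what lets the knee swing freely; the engaged branch stores energy during stance without drawing motor power.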


48.

Biomimetic Active Prosthesis for Above-Knee Amputees

Hugh Herr, Elliott Rouse and Luke Mooney


Using biologically inspired design principles, a biomimetic robotic knee prosthesis is
proposed that uses a clutchable series-elastic actuator. In this design, a clutch is placed
in parallel to a combined motor and spring. This architecture permits the mechanism to
provide biomimetic walking dynamics while requiring minimal electromechanical energy
from the prosthesis. The overarching goal of this project is to design a new generation
of robotic knee prostheses capable of generating significant energy during level-ground
walking, energy that can be stored in a battery and used to power a robotic ankle
prosthesis and other net-positive locomotion modes (e.g., stair ascent).
Alumni Contributor: Ernesto C. Martinez-Villalpando

49.

Control of Muscle-Actuated Systems via Electrical Stimulation

Hugh Herr
Motivated by applications in rehabilitation and robotics, we are developing
methodologies to control muscle-actuated systems via electrical stimulation. As a
demonstration of such potential, we are developing centimeter-scale robotic systems
that utilize muscle for actuation and glucose as a primary source of fuel. This is an
interesting control problem because muscles: a) are mechanical state-dependent
actuators; b) exhibit strong nonlinearities; and c) have slow time-varying properties due
to fatigue-recuperation, growth-atrophy, and damage-healing cycles. We are
investigating a variety of adaptive and robust control techniques to enable us to achieve
trajectory tracking, as well as mechanical power-output control under sustained
oscillatory conditions. To implement and test our algorithms, we developed an
experimental capability that allows us to characterize and control muscle in real time,
while imposing a wide variety of dynamical boundary conditions.
Alumni Contributor: Waleed A. Farahat

50.

Dancing Control System for Bionic Ankle Prosthesis

Hugh Herr, Bevin Lin, Elliott Rouse, Nathan Villagaray-Carski and Robert Emerson

Professional ballroom dancer Adrianne Haslet-Davis lost her natural ability to dance
when her left leg was amputated below the knee following the Boston Marathon
bombings in April 2013. Hugh Herr was introduced to Adrianne while meeting with
bombing survivors at Boston's Spaulding Rehabilitation Hospital. For Professor Herr,
this meeting generated a research challenge: build Adrianne a bionic ankle prosthesis,
and restore her ability to dance. The research team spent some 200 days studying the
biomechanics of dancing and designing the bionic technology based on their
investigations. The control system for Adrianne was implemented on a customized
BiOM bionic ankle prosthesis.

51.

Effect of a Powered Ankle on Shock Absorption and Interfacial Pressure

Hugh Herr and David Hill

Lower-extremity amputees face a series of potentially serious post-operative
complications, among them an increased risk of further amputations, excessive stress
on the unaffected and residual limbs, and discomfort at the human-prosthesis interface.
Conventional, passive prostheses have made strides toward alleviating these risks, but
we believe the limit of "dumb" elastic prostheses has been reached; to make further
strides, we must integrate "smart" technology, in the form of sensors and actuators, into
lower-limb prostheses. This project compares shock absorption and socket pressure
between passive and active ankle-foot prostheses, in an attempt to quantitatively
evaluate patient comfort.


52.

FitSocket: Measurement for Attaching Objects to People

Arthur Petron, Hugh Herr and Neri Oxman


A better understanding of the biomechanics of human tissue allows for better
attachment of load-bearing objects to people. Think of shoes, ski boots, car seats,
orthotics, and more. We are focusing on prosthetic sockets, the cup-shaped devices
that attach an amputated limb to a lower-limb prosthesis, which are currently made
through unscientific, artisanal methods that do not deliver repeatable quality and comfort
from one individual to the next. The FitSocket project aims to identify the correlation
between leg tissue properties and the design of a comfortable socket. The FitSocket is
a robotic socket measurement device that directly measures tissue properties. With
these data, we can rapid-prototype test sockets and socket molds in order to make
rigid, spatially variable stiffness, and spatially/temporally variable stiffness sockets.
Alumni Contributor: Elizabeth Tsai

53.

FlexSEA: Flexible, Scalable Electronics Architecture for Wearable Robotics Applications

Hugh Herr and Jean-Francois Duval

This project aims to enable fast prototyping of multi-axis, multi-joint active prostheses
by developing a new modular electronics system. The system provides the hardware
and software required for precise motion control, data acquisition, and networking.
Scalability comes from a fast industrial communication protocol between modules and
from standardized peripheral interfaces: functionality can be added to the system
simply by plugging in additional cards. Hardware and software encapsulation provides
high-performance, real-time control of the actuators while keeping high-level algorithm
development and prototyping simple, fast, and easy.

54.

Human Walking Model Predicts Joint Mechanics, Electromyography, and Mechanical
Economy

Hugh Herr, Matthew Furtney and Stanford Research Institute

We are studying the mechanical behavior of leg muscles and tendons during human
walking in order to motivate the design of power-efficient robotic legs. The Endo-Herr
walking model uses only three actuators (leg muscles) to power locomotion; springs
and clutches take the place of the other essential tendons and muscles, storing energy
and transferring it from one joint to another during walking. Since mechanical clutches
require much less energy than electric motors, the model can be used to design highly
efficient robotic legs and exoskeletons. Current work includes analysis of the model at
variable walking speeds and informing design specifications for a collaborative
"SuperFlex" exosuit project.
Alumni Contributor: Ken Endo

55.


Load-Bearing Exoskeleton for Augmentation of Human Running

Hugh Herr, Grant Elliott and Andrew Marecki


Augmentation of human locomotion has proved an elusive goal. Natural human walking
is extremely efficient, and the complex articulation of the human leg poses significant
engineering difficulties. We present a wearable exoskeleton designed to reduce the
metabolic cost of jogging. The exoskeleton places a stiff fiberglass spring in parallel
with the complete leg during stance phase, then removes it so that the knee may bend
during leg swing. The result is a bouncing gait with reduced reliance on the musculature
of the knee and ankle.


56.

Neural Interface Technology for Advanced Prosthetic Limbs

Edward Boyden, Hugh Herr, Ron Riso and Katherine Song

Recent advances in artificial limbs have provided powered ankle and knee function for
lower-extremity amputees and powered elbow, wrist, and finger joints for
upper-extremity prostheses. Researchers still struggle, however, with how to provide
prosthesis users with full volitional and simultaneous control of the powered joints. This
project seeks to develop means to allow amputees to control their powered prostheses
by activating the peripheral nerves present in their residual limb. Such neural control
can be more natural than the myoelectric control currently in use, since the same
functions previously served by particular motor fascicles can be directed to the
corresponding prosthesis actuators for simultaneous joint control, as in intact limbs.
Future plans include the capability to electrically activate the sensory components of
residual-limb nerves to provide amputees with tactile feedback and an awareness of
joint position from their prostheses.

57.

Powered Ankle-Foot Prosthesis

Hugh Herr

The human ankle provides a significant amount of net positive work during the stance
period of walking, especially at moderate to fast walking speeds. Conventional
ankle-foot prostheses, by contrast, are completely passive during stance and
consequently cannot provide net positive work. Clinical studies indicate that transtibial
amputees using conventional prostheses experience many problems during
locomotion, including high gait metabolism, low gait speed, and gait asymmetry.
Researchers believe the main cause of these problems is the inability of conventional
prostheses to provide net positive work during stance. The objective of this project is to
develop a powered ankle-foot prosthesis that is capable of providing net positive work
during the stance period of walking. To this end, we are investigating mechanical
design and control-system architectures for the prosthesis, and conducting a clinical
evaluation of the proposed prosthesis with amputee participants.
Alumni Contributor: Samuel Au

58.

Sensor-Fusions for an EMG-Controlled Robotic Prosthesis

Matthew Todd Farrell and Hugh Herr

Current unmotorized prostheses do not provide adequate energy return during late
stance to improve level-ground locomotion. Robotic prostheses can provide power
during late stance to improve an amputee's metabolic economy during level-ground
walking. This project seeks to expand the types of terrain a robotic ankle can
successfully navigate by using command signals taken from the intact and residual
limbs of an amputee. By combining these command signals with sensors attached to
the robotic ankle, it may be possible to further understand the role of physiological
signals in the terrain adaptation of robotic ankles.

59.

Tethered Robotic System for Understanding Human Movements

Hugh Herr and Jiun-Yih Kuan

The goal of this project is to build a powerful system to serve as a scientific tool for
bridging a gap in the literature: determining the dynamic biomechanics of the lower-limb
joints and the metabolic effects of physical interventions during natural locomotion. The
system applies forces to the human body while simultaneously measuring force,
displacement, and other physiological properties, helping investigate the controllability
and efficacy of mechanical devices that physically interact with a human subject.

60.

Variable-Impedance Prosthetic (VIPr) Socket Design

Hugh Herr, Arthur J Petron, Bryan Ranger and David Sengeh

Today, 100 percent of amputees experience some form of prosthetic socket discomfort.
This project involves the design and production of a comfortable, variable-impedance
prosthetic (VIPr) socket for a transtibial amputee, using digital anatomical data together
with computer-aided design and manufacturing (CAD/CAM). The VIPr socket uses
multiple materials to achieve compliance, thereby increasing socket comfort for
amputees while maintaining structural integrity. The compliant features are seamlessly
integrated into the 3D-printed socket to achieve lower interface peak pressures over
bony protuberances and other anatomical points in comparison to a conventional
socket. This lower peak pressure is achieved through a design that uses
anthropomorphic data acquired through surface scanning and magnetic resonance
imaging. A mathematical transformation spatially maps the quantitative measurements
of the human residual limb to the corresponding socket shape and impedance
characteristics.

61.

Volitional Control of a Powered Ankle-Foot Prosthesis

Hugh Herr and Oliver Kannape


This project focuses on giving transtibial amputees volitional control over their
prostheses by combining electromyographic (EMG) activity from the amputees' residual
limb muscles with intrinsic controllers on the prosthesis. The aim is to generalize
biomimetic behavior of the prosthesis, making it independent of walking terrains and
transitions.

Cesar Hidalgo: Macro Connections


Transforming data into knowledge.

62.

Collective Memory

Cesar A. Hidalgo, C. Jara-Figueroa, and Amy Yu


Collective memory is formed from the information that our species embeds in both
humans and objects. We encode this information as order within physical systems such
as media technologies, which allows us to transmit and preserve collective memory for
posterity. We use the biographies of 11,341 memorable people that comprise the
Pantheon dataset to study how changes in the systems used to store information affect
the quantity and composition of our species' collective memory. We find that changes in
media technology such as the printing press, the industrial revolution, the telegraph,
and television mark milestones within the evolution of the composition of our collective
memory; composition that is directly affected by the predominant communication
technology of the time. We also find that these milestones mark changes in the quantity
of information from each period that makes up our collective memory.

63.

Data Visualization: The Pixel Factory

Cesar A. Hidalgo and Macro Connections group

The rise of computational methods has generated a new natural resource: data. While
it's unclear if big data will open up trillion-dollar markets, it is clear that making sense of
data isn't easy, and that data visualizations are essential to squeeze meaning out of
data. But the capacity to create data visualizations is not widespread; to help develop
it, we introduce the Pixel Factory, a new initiative focused on creating data-visualization
resources and tools in collaboration with corporate members. Our goals are to create
software resources for the development of online data-visualization platforms that work
with any type of data, and to create these resources as a means to learn. The most
valuable outcome of this work will not be the software resources produced, incredible
as these could be, but the generation of people with the capacity to make these
resources.

64.

DIVE

Cesar A. Hidalgo, Kevin Zeng Hu and Gurubaven Mahendran

DIVE is a platform for efficiently generating web-based, interactive visualizations or
statistical analyses of structured data sets. DIVE aims to lower the barrier to
understanding data so users can focus on interpreting results, not technical minutiae.


65.

FOLD

Alexis Hope, Kevin Hu, Joe Goldbeck, Nathalie Huynh, Matthew Carroll, Cesar A.
Hidalgo, Ethan Zuckerman
FOLD is an authoring and publishing platform for creating modular, multimedia stories.
Some readers require greater context to understand complex stories. Using FOLD,
authors can search for and add "context cards" to their stories. Context cards can
contain videos, maps, tweets, music, interactive visualizations, and more. FOLD also
allows authors to link stories together by remixing context cards created by other
writers.

66.

GIFGIF

Cesar A. Hidalgo, Andrew Lippman, Kevin Zeng Hu and Travis Rich


An animated GIF is a magical thing. It has the power to compactly convey emotion,
empathy, and context in a subtle way that text or emoticons often miss. GIFGIF is a
project to combine that magic with quantitative methods. Our goal is to create a tool
that lets people explore the world of GIFs by the emotions they evoke, rather than by
manually entered tags. A website with 200,000 users maps the GIFs to an emotion
space and lets you peruse them interactively.

67.

Immersion

Deepak Jagdish, Daniel Smilkov and Cesar Hidalgo


Immersion is a visual data experiment that delivers a fresh perspective of your email
inbox. Focusing on a people-centric approach rather than the content of the emails,
Immersion brings into view an important personal insight: the network of people you are
connected to via email, and how it evolves over the course of many years. Given that
this experiment deals with data that is extremely private, it is worthwhile to note that
when given secure access to your Gmail inbox (which you can revoke any time),
Immersion only uses data from email headers and not a single word of any email's
subject or body content.

68.

Opus

Cesar A. Hidalgo and Miguel Guevara


Opus is an online tool exploring the work and trajectory of scholars. Through a suite of
interactive visualizations, Opus helps users explore the academic impact of a scholar's
publications, discover her network of collaborators, and identify her peers.

69.

Pantheon

Ali Almossawi, Andrew Mao, Defne Gurel, Cesar A. Hidalgo, Kevin Zeng Hu,
Deepak Jagdish, Amy Yu, Shahar Ronen and Tiffany Lu
We were not born with the ability to fly, cure disease, or communicate at long distances,
but we were born in a society that endows us with these capacities. These capacities
are the result of information that has been generated by humans and that humans have
been able to embed in tangible and digital objects. This information is all around us: it's
the way in which the atoms in an airplane are arranged or the way in which our
cellphones whisper dance instructions to electromagnetic waves. Pantheon is a project
celebrating the cultural information that endows our species with these fantastic
capacities. To celebrate our global cultural heritage, we are compiling, analyzing, and
visualizing datasets that can help us understand the process of global cultural
development.

70.

Place Pulse

Phil Salesses, Anthony DeVincenzi, and Cesar A. Hidalgo


Place Pulse is a website that allows anybody to quickly run a crowdsourced study and
interactively visualize the results. It works by taking a complex question, such as
"Which place in Boston looks the safest?" and breaking it down into easier-to-answer
binary pairs. Internet participants are given two images and asked "Which place looks
safer?" From the responses, directed graphs are generated and can be mined, allowing
the experimenter to identify interesting patterns in the data and form new hypotheses
based on their observations. It works with any city or question and is highly scalable.
With an increased understanding of human perception, it should be possible for
calculated policy decisions to have a disproportionate impact on public opinion.
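The pair-to-ranking step can be sketched with a toy scorer: count, for each image, the fraction of the comparisons it appears in that it wins. This win-ratio rule is a deliberately simple stand-in for the graph-mining Place Pulse actually performs on its directed graphs:

```python
from collections import defaultdict

def rank_from_pairs(comparisons):
    """Score images from binary 'which looks safer?' votes.
    comparisons: iterable of (winner_id, loser_id) pairs.
    score = wins / appearances, a simple stand-in for the ranking
    methods mined from Place Pulse's directed comparison graph."""
    wins = defaultdict(int)
    seen = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        seen[winner] += 1
        seen[loser] += 1
    return {img: wins[img] / seen[img] for img in seen}

votes = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "b")]
scores = rank_from_pairs(votes)  # "a" wins every comparison it appears in
```

A production ranking would also correct for the strength of each image's opponents, which a raw win ratio ignores.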


71.

StreetScore

Nikhil Naik, Jade Philipoom, Ramesh Raskar, Cesar Hidalgo


StreetScore is a machine learning algorithm that predicts the perceived safety of a
streetscape. StreetScore was trained using 2,920 images of streetscapes from New
York and Boston and their rankings for perceived safety obtained from a crowdsourced
survey. To predict an image's score, StreetScore decomposes this image into features
and assigns the image a score based on the associations between features and scores
learned from the training dataset. We use StreetScore to create a collection of map
visualizations of perceived safety of street views from cities in the United States.
StreetScore allows us to scale up the evaluation of streetscapes by several orders of
magnitude when compared to a crowdsourced survey. StreetScore can empower
research groups connecting urban perception with social and economic outcomes by
providing high-resolution data on urban perception.
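The predict-from-features step can be illustrated with a toy regressor: score a new image by averaging the crowd scores of its nearest training images in feature space. This k-nearest-neighbor stand-in is an assumption for illustration only; StreetScore's actual pipeline uses learned image descriptors and a trained regression model:

```python
import math

def knn_predict(features, train, k=3):
    """Predict a perceived-safety score for a new streetscape by averaging
    the crowd scores of its k nearest training images in feature space.
    train: list of (feature_vector, crowd_score) pairs. Real feature
    vectors would come from image descriptors, not hand-typed numbers."""
    dists = sorted((math.dist(features, f), score) for f, score in train)
    nearest = dists[:k]
    return sum(score for _, score in nearest) / len(nearest)

# Toy training set: two "safe-looking" and two "unsafe-looking" images
training = [([0.1, 0.9], 8.0), ([0.2, 0.8], 7.5),
            ([0.9, 0.1], 2.0), ([0.8, 0.2], 2.5)]
print(knn_predict([0.15, 0.85], training, k=2))  # lands near the safe cluster
```

The appeal of learning any such regressor is exactly the scaling claim above: once trained, it scores millions of street views at negligible marginal cost.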

72.

The Economic Complexity Observatory

Alex Simoes and Cesar A. Hidalgo

73.

The Language Group Network

Shahar Ronen, Kevin Hu, Michael Xu, and Cesar A. Hidalgo

74.

The Network Impact in Success

Cesar A. Hidalgo and Miguel Guevara

75.

The Privacy Bounds of Human Mobility

Cesar A. Hidalgo and Yves-Alexandre de Montjoye

With more than six billion people and 15 billion products, the world economy is anything
but simple. The Economic Complexity Observatory is an online tool that helps people
explore this complexity by providing tools that can allow decision makers to understand
the connections that exist between countries and the myriad of products they produce
and/or export. The Economic Complexity Observatory puts at everyone's fingertips the
latest analytical tools developed to visualize and quantify the productive structure of
countries and their evolution.
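Among the analytical tools behind such complexity measures is Hidalgo and Hausmann's "method of reflections," which iterates between a country's diversity (how many products it exports) and its products' ubiquity (how many countries export them). A simplified sketch over a toy binary country-by-product export matrix (the real computation runs on cleaned, normalized trade data, and assumes every country exports at least one product):

```python
def method_of_reflections(M, iterations=4):
    """Iterate between country diversity and product ubiquity over a
    binary country-by-product matrix M (M[c][p] == 1 if country c
    exports product p). Each pass divides by the zeroth-order diversity
    or ubiquity, per the method of reflections. Returns (kc, kp)."""
    n_c, n_p = len(M), len(M[0])
    kc0 = [sum(row) for row in M]                                 # diversity
    kp0 = [sum(M[c][p] for c in range(n_c)) for p in range(n_p)]  # ubiquity
    kc, kp = list(kc0), list(kp0)
    for _ in range(iterations):
        kc_next = [sum(M[c][p] * kp[p] for p in range(n_p)) / kc0[c]
                   for c in range(n_c)]
        kp_next = [sum(M[c][p] * kc[c] for c in range(n_c)) / kp0[p]
                   for p in range(n_p)]
        kc, kp = kc_next, kp_next
    return kc, kp

# Toy matrix: country 0 exports both products, country 1 only product 1
M = [[1, 1],
     [0, 1]]
kc, kp = method_of_reflections(M, iterations=2)
```

In this toy case the more diversified country 0 ends with the higher complexity value, which is the qualitative behavior the measure is designed to capture.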

Most interactions between cultures require overcoming a language barrier, which is why
multilingual speakers play an important role in facilitating such interactions. In addition,
certain languages (not necessarily the most spoken ones) are more likely than others to
serve as intermediary languages. We present the Language Group Network, a new
approach for studying global networks using data generated by tens of millions of
speakers from all over the world: a billion tweets, Wikipedia edits in all languages, and
translations of two million printed books. Our network spans over eighty languages, and
can be used to identify the most connected languages and the potential paths through
which information diffuses from one culture to another. Applications include promotion
of cultural interactions, prediction of trends, and marketing.
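The core construction can be sketched as a co-expression graph: two languages are linked each time one user (or book translation, or Wikipedia editor) connects them, and weighted degree then hints at which languages act as hubs. This is a simplified stand-in for the paper's normalized connection strengths:

```python
from collections import Counter
from itertools import combinations

def language_network(user_languages):
    """Build a language co-expression network from multilingual users.
    user_languages: list of per-user language lists. Each user adds one
    unit of weight to every pair of languages they express themselves in.
    Returns (edge weights, weighted degree per language)."""
    edges = Counter()
    for langs in user_languages:
        for a, b in combinations(sorted(set(langs)), 2):
            edges[(a, b)] += 1
    degree = Counter()
    for (a, b), w in edges.items():
        degree[a] += w
        degree[b] += w
    return edges, degree

# Toy sample of multilingual users (illustrative, not real data)
users = [["en", "es"], ["en", "fr"], ["en", "es", "fr"],
         ["es", "pt"], ["en", "de"]]
edges, degree = language_network(users)
# In this sample English has the highest weighted degree, i.e., it is
# the best-connected intermediary language
```

On real data the same construction, suitably normalized for language size, is what surfaces hub languages on paths of information diffusion.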

Diverse teams of authors are known to generate higher-impact research papers, as
measured by their number of citations. But is this because cognitively diverse teams
produce higher-quality work, which is more likely to get cited and adopted? Or is it
because they possess a larger number of social connections through which to distribute
their findings? In this project we are mapping the co-authorship networks and the
academic diversity of the authors in a large volume of scientific publications to test
whether the adoption of papers is explained by cognitive diversity or by the size of the
network associated with their authors. This project will help us understand whether the
greater adoption of work generated by diverse groups is the result of higher quality or
of better connections.

We used 15 months of data from 1.5 million people to show that four points (approximate
places and times) are enough to identify 95 percent of individuals
in a mobility database. Our work shows that human behavior puts fundamental natural
constraints on the privacy of individuals, and these constraints hold even when the
resolution of the dataset is low. These results demonstrate that even coarse datasets
provide little anonymity. We further developed a formula to estimate the uniqueness of
human mobility traces. These findings have important implications for the design of
frameworks and institutions dedicated to protecting the privacy of individuals.
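The uniqueness estimate can be sketched directly: sample k spatiotemporal points from a user's trace and check whether any other user's trace also contains them. Toy data and a simplified matching rule, assuming traces are sets of (antenna, time) observations (the published formula is analytical, not this Monte Carlo stand-in):

```python
import random

def uniqueness(traces, k, trials=200, seed=0):
    """Estimate trace uniqueness: draw k random (place, time) points from
    a randomly chosen user's trace and check how often those points match
    no other user. traces: {user: set of (place, time) tuples}."""
    rng = random.Random(seed)
    users = list(traces)
    unique = 0
    for _ in range(trials):
        user = rng.choice(users)
        pts = rng.sample(sorted(traces[user]), min(k, len(traces[user])))
        matches = [u for u in users if set(pts) <= traces[u]]
        if matches == [user]:
            unique += 1
    return unique / trials

# Toy data: three users with partially overlapping observations
traces = {
    "u1": {("antA", 9), ("antB", 10), ("antC", 11)},
    "u2": {("antA", 9), ("antB", 10), ("antD", 12)},
    "u3": {("antE", 9), ("antF", 10), ("antC", 11)},
}
print(uniqueness(traces, k=2))  # most 2-point samples already single out a user
```

Even in this tiny example most two-point samples pin down one user, which is the coarse-data intuition behind the 95-percent result.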


Hiroshi Ishii: Tangible Media


Seamlessly coupling the worlds of bits and atoms by giving dynamic physical form to
digital information and computation.

76.

bioLogic

Lining Yao
This research introduces bacterial endospores as microscale biological actuators to
build stimuli-responsive, shape-changing interfaces. We demonstrate the unique
programmability of spore actuators for achieving transformations on human scale
surfaces through precise spore deposition and geometric pattern designs. We describe
the process from wet-lab spore cultivation to machine-shop printing spore solutions, as
well as exemplifying their applications. This research intends to contribute to the
understanding of the control and programming of natural materials. We hope to
encourage interface designers to consider adapting living organisms to build actuation
mechanisms and to help grow the nascent field of applying biological technologies to
HCI.

77.

inFORM

Hiroshi Ishii, Alex Olwal, Daniel Leithinger and Sean Follmer


Shape displays can be used to render both 3D physical content and user interface
elements. We propose to use shape displays in three different ways to mediate
interaction: facilitate, providing dynamic physical affordances through shape change;
restrict, guiding users through dynamic physical constraints; and manipulate, actuating
passive physical objects on the interface surface. We demonstrate this on a new,
high-resolution shape display.

78.

jamSheets: Interacting with Thin Stiffness-Changing Material

Jifei Ou, Lining Yao, Daniel Tauber, Juergen Steimle, Ryuma Niiyama, Hiroshi Ishii

This project introduces layer jamming as an enabling technology for designing
deformable, stiffness-tunable, thin sheet interfaces. Interfaces that exhibit tunable
stiffness properties can yield dynamic haptic feedback and shape-deformation
capabilities. In contrast to particle jamming, layer jamming allows for constructing thin
and lightweight interface form factors. We propose five-layer structure designs and an
approach that composites multiple materials to control the deformability of the
interfaces. We also present methods to embed different types of sensing and
pneumatic actuation layers in the layer-jamming unit. Through three application
prototypes, we demonstrate the benefits of using layer jamming in interface design.
Finally, we provide a survey of materials that have proven successful for layer jamming.

79.

LineFORM

Ken Nakagaki, Sean Follmer and Hiroshi Ishii

We propose a novel shape-changing interface that consists of a single line. Lines have
several interesting characteristics from the perspective of interaction design:
abstractness of data representation; a variety of inherent interactions and affordances;
and constraints such as boundaries or borderlines. By utilizing these aspects of lines
together with the added transformation capability, we present various applications in
different scenarios, such as shape-changing cords, mobiles, body constraints, and data
manipulation, to investigate the design space of line-based shape-changing interfaces.

80.

MirrorFugue

Xiao Xiao and Hiroshi Ishii


MirrorFugue is an installation for a player piano that evokes the impression that the
"reflection" of a disembodied pianist is playing the physically moving keys. Live music
emanates from a grand piano, whose keys move under the supple touch of a pianist's
hands reflected on the lacquered surface of the instrument. The pianist's face is
displayed on the music stand, with subtle expressions projecting the emotions of the
music. MirrorFugue recreates the feeling of a live performance, but no one is actually
there. The pianist is an illusion of light and mirrors, a ghost both present and absent.


Viewing MirrorFugue evokes the sense of walking into a memory, where the pianist
plays without awareness of the viewer's presence; or, it is as if viewers were ghosts in
another's dream, able to sit down in place of the performing pianist and play along.

81.

MMODM: Massively Multiplayer Online Drum Machine

Joseph A. Paradiso, Tod Machover, Donald Derek H. and Basheer Tome

82.

Pneumatic Shape-Changing Interfaces

Jifei Ou, Lining Yao, Ryuma Niiyama, Sean Follmer and Hiroshi Ishii

83.

Radical Atoms

Hiroshi Ishii

MMODM is an online drum machine based on the Twitter streaming API, using tweets
from around the world to create and perform musical sequences together in real time.
Users anywhere can express 16-beat note sequences across 26 different instruments,
using plain-text tweets from any device. Meanwhile, users on the site itself can use the
graphical interface to locally DJ the rhythm, filters, and sequence blending. By
harnessing this duo of website and Twitter network, MMODM enables a whole new
scale of synchronous musical collaboration between users locally, remotely, across a
wide variety of computing devices, and across a variety of cultures.

An enabling technology to build shape-changing interfaces through pneumatically


driven, soft-composite materials. The composite materials integrate the capabilities of
both input sensing and active shape output. We explore four applications: a multi-shape
mobile device, table-top shape-changing tangibles, dynamically programmable texture
for gaming, and a shape-shifting lighting apparatus.

Radical Atoms is our vision of interactions with future materials. Radical Atoms takes a
leap beyond Tangible Bits by assuming a hypothetical generation of materials that can
change form and appearance dynamically, becoming as reconfigurable as pixels on a
screen. Radical Atoms is a computationally transformable and reconfigurable material
that is bidirectionally coupled with an underlying digital model (bits) so that dynamic
changes of physical form can be reflected in digital states in real time, and vice versa.
Alumni Contributors: Keywon Chung, Adam Kumpf, Amanda Parkes, Hayes Raffle and
Jamie B Zigelbaum
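The MMODM entry above describes plain-text tweets that encode 16-beat note sequences across 26 instruments. As an illustration only (the project's actual tweet grammar is not documented in this catalog), such a message might be parsed as follows, assuming one letter selects the instrument and 'x'/'-' mark hits and rests:

```python
# Hypothetical parser for MMODM-style plain-text tweets. Assumes one
# letter a-z selects one of 26 instruments and 16 following symbols mark
# hits ('x') and rests ('-'). The real MMODM tweet format may differ.

def parse_sequence(tweet):
    """Return (instrument, list of 16 booleans) or raise on a malformed tweet."""
    if len(tweet) != 17:
        raise ValueError("expected one instrument letter + 16 beat symbols")
    instrument, pattern = tweet[0].lower(), tweet[1:]
    if not instrument.isalpha():
        raise ValueError("first character must select an instrument (a-z)")
    return instrument, [c == "x" for c in pattern]

inst, beats = parse_sequence("kx---x---x---x---")
print(inst)        # k
print(sum(beats))  # 4
```

A real implementation would sit behind a streaming-API consumer and merge many such sequences into one shared loop.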

84.

TRANSFORM

Hiroshi Ishii, Sean Follmer, Daniel Leithinger, Philipp Schoessler, Amit Zoran and
LEXUS International
TRANSFORM fuses technology and design to celebrate its transformation from still
furniture to a dynamic machine driven by a stream of data and energy. TRANSFORM
aims to inspire viewers with unexpected transformations and the aesthetics of the
complex machine in motion. First exhibited at LEXUS DESIGN AMAZING MILAN (April
2014), the work comprises three dynamic shape displays that move over one thousand
pins up and down in real time to transform the tabletop into a dynamic tangible display.
The kinetic energy of the viewers, captured by a sensor, drives the wave motion
represented by the dynamic pins. The motion design is inspired by dynamic interactions
among wind, water, and sand in nature, Escher's representations of perpetual motion,
and the attributes of sand castles built at the seashore. TRANSFORM tells of the
conflict between nature and machine, and its reconciliation, through the ever-changing
tabletop landscape.

85.


TRANSFORM:
Adaptive and
Dynamic Furniture

Luke Vink, Viirj Kan, Ken Nakagaki, Daniel Leithinger, Sean Follmer, Philipp
Schoessler, Amit Zoran, Hiroshi Ishii
Introducing TRANSFORM, a shape-changing desk. TRANSFORM is an exploration of
how shape display technology can be integrated into our everyday lives as interactive,
transforming furniture. These interfaces not only serve as traditional computing devices,
but also support a variety of physical activities. By creating shapes on demand or by
moving objects around, TRANSFORM changes the ergonomics and aesthetic
dimensions of furniture, supporting a variety of use cases at home and work: it holds
and moves objects like fruit, game tokens, office supplies, and tablets, creates dividers
on demand, and generates interactive sculptures to convey messages and audio.


Joseph M. Jacobson: Molecular Machines


Engineering at the limits of complexity with molecular-scale parts.

86.

Context-Aware Biology

Joseph M. Jacobson and Charles Fracchia

Current biological research workflows make use of disparate, poorly integrated systems that impose a large mental burden on the scientist, leading to mistakes in long, complex, and costly experimental procedures. The lack of open tools to assist in the collection of distributed experimental conditions and data is largely responsible for making protocols difficult to debug and laboratory practice hard to learn. In this work, we describe an open Protocol Descriptor Language (PDL) and system to enable a context-rich, quantitative approach to biological research. We detail the development of a closed-loop pipetting technology and a wireless sample-temperature sensor that integrate with our Protocol Description platform, enabling novel, real-time experimental feedback to the researcher, thereby reducing mistakes and increasing overall scientific reproducibility.

87.

Context-Aware Pipette

Charles Fracchia, Jason Fischman, Matt Carney, and Joseph M. Jacobson

Pipettes are to biology what the keyboard is to computer science: a key tool for interfacing with the subject matter. In the case of the pipette, it enables the scientist to move precise amounts of liquid. Pipette design hasn't changed in over 30 years. We've designed a new type of pipette that allows wireless, context-aware operation.

88.

GeneFab

Bram Sterling, Kelly Chang, Joseph M. Jacobson, Peter Carr, Brian Chow, David Sun Kong, Michael Oh and Sam Hwang

What would you like to "build with biology"? The goal of the GeneFab project is to develop technology for the rapid fabrication of large DNA molecules, with composition specified directly by the user. Our intent is to facilitate the field of synthetic biology as it moves from a focus on single genes to designing complete biochemical pathways, genetic networks, and more complex systems. Sub-projects include: DNA error correction, microfluidics for high-throughput gene synthesis, and genome-scale engineering (rE. coli).
Alumni Contributor: Chris Emig
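The Context-Aware Biology entry above centers on an open Protocol Descriptor Language (PDL). Its actual syntax is not shown in this catalog, but a hypothetical sketch of the kind of machine-readable protocol record it implies (ordered steps with quantitative context that tools can validate, log, and share) might look like:

```python
# Illustrative protocol record in the spirit of the PDL described above:
# each step carries its action plus the quantitative context needed to
# debug and reproduce a run. Field names are invented for this sketch.

import json

protocol = {
    "name": "plasmid_miniprep",
    "steps": [
        {"action": "pipette", "volume_ul": 250, "reagent": "buffer_P1"},
        {"action": "incubate", "temp_c": 37, "minutes": 5},
    ],
}

def validate(protocol):
    """Minimal check: every step names an action and all quantities are positive."""
    for step in protocol["steps"]:
        assert "action" in step, "each step must name an action"
        for key, val in step.items():
            if isinstance(val, (int, float)):
                assert val > 0, f"{key} must be positive"
    return True

print(validate(protocol))  # True
# Because the record is plain data, it serializes for exchange between
# instruments (e.g., the context-aware pipette) and logging tools.
print(json.dumps(protocol)[:30])
```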

89.

NanoFab

Kimin Jun, Jaebum Joo, and Joseph M. Jacobson


We are developing techniques to use a focused ion beam to program the fabrication of nanowire-based nanostructures and logic devices.

90.

Scaling Up DNA Logic and Structures

Joseph M. Jacobson and Noah Jakimo

Our goals include novel gene logic and data-logging systems, as well as DNA scaffolds that can be produced at commercial scale. The state of the art in the former is limited by the search for proteins analogous and orthogonal to those used in current single-layer gates and two-layered circuits. The state of the art in the latter is constrained in size and efficiency by kinetic limits on self-assembly. We have designed, and plan to demonstrate, cascaded logic on chromosomes and DNA scaffolds that exhibit exponential growth.

91.

Synthetic Photosynthesis

Joseph M. Jacobson and Kimin Jun

We are using nanowires to build structures for synthetic photosynthesis for the solar generation of liquid fuels.


Sepandar Kamvar: Social Computing


Creating sociotechnical systems that shape our urban environments.

92.

Microculture
NEW LISTING

Josh Sarantitis, Sepandar Kamvar, Yonatan Cohen, Kathryn Grantham and Lisa DePiano

Microculture gardens are a network of small-scale permaculture gardens aimed at reimagining our urban food systems, remediating our air supply, and making our streets more amenable to human-scale mobility. Microculture combines micro-gardening with the principles of permaculture, creatively occupying viable space throughout our communities for small-scale, self-sustaining food forests. Micro-gardens have proven successful for the production of a broad range of species, including leafy vegetables, fruit, root vegetables, herbs, and more. Traditionally, container-based micro-gardens occupy approximately one meter of space or less and are made from found, up-cycled materials. Our innovations combine permaculture and micro-gardening principles, developing materials and designs that allow for modularity, mobility, easy replicability, and placement in parking spots, along with software that supports the placement, creation, and maintenance of these gardens.

93.

Storyboards

Sepandar Kamvar, Kevin Slavin, Jonathan Bobrow and Shantell Martin

Giving opaque technology a glass house, Storyboards present the tinkerers or owners of electronic devices with stories of how their devices work. Just as the circuit board is a story of star-crossed lovers Anode and Cathode with its cast of characters (resistor, capacitor, transistor), Storyboards have their own characters driving a parallel visual narrative.

94.

The Dog Programming Language

Salman Ahmad and Sep Kamvar

Dog is a new programming language that makes it easy and intuitive to create social applications. A key feature of Dog is built-in support for interacting with people. Dog provides a natural framework in which both people and computers can be sent requests and return results. It can perform a long-running computation while also displaying messages, requesting information, or sending operations to particular individuals or groups. By switching between machine and human computation, developers can create powerful workflows and model complex social processes without worrying about low-level technical details.

95.

Wildflower Montessori

Sep Kamvar, Kim Smith, Yonatan Cohen, Kim Holleman, Nazmus Saquib, Caroline Jaffe

Wildflower is an open-source approach to Montessori learning. Its aim is to be an experiment in a new learning environment, blurring the boundaries between home-schooling and institutional schooling, between scientists and teachers, and between schools and the neighborhoods around them. At the core of Wildflower are nine principles that define the approach. The Wildflower approach has been implemented by several schools, which serve as a research platform for the development of Montessori materials that advance the Montessori Method, software tools that enable Montessori research, and social software that fosters the growth and connection of such schools.

96.

You Are Here

Sep Kamvar, Yonatan Cohen, Wesam Manassra, Pranav Ramkrishnan, Stephen Rife, Jia Zhang, Edward Faulkner, Kim Smith, Asa Oines, Jake Sanchez, and Jennifer Jang
You Are Here is an experiment in microurbanism. In this project, we are creating 100
maps each of 100 different cities. Each map gives a collective portrait of one aspect of
life in the city, and is designed to give communities meaningful micro-suggestions of
what they might do to improve their city. The interplay between the visualizations and
the community work they induce creates a collective, dynamic, urban-scale project.


Kent Larson: Changing Places


Enabling dynamic, evolving places that respond to the complexities of life.

97.

ARkits: Architectural
Robotics Kits

Kent Larson, Luis Alberto Alonso Pastor, Ivan Fernandez, Hasier Larrea and
Carlos Rubio
In an urbanized world, where space is too valuable to be static and unresponsive,
ARkits provide a robotic kit of parts to empower real estate developers, furniture
manufacturers, architects, and "space makers" in general, to create a new generation
of transformable and intelligent spaces.

98.

BoxLab

Kent Larson and Jason Nawyn


The PlaceLab was a highly instrumented, apartment-scale, shared research facility
where new technologies and design concepts were tested and evaluated in the context
of everyday living. It was used by researchers until 2008 to collect fine-grained human
behavior and environmental data, and to systematically test and evaluate strategies
and technologies for the home in a natural setting with volunteer occupants. BoxLab is
a portable version with many of the data collection capabilities of PlaceLab. BoxLab can
be deployed in any home or workplace.
Alumni Contributors: Jennifer Suzanne Beaudin, Edward Burns, Manu Gupta, Pallavi
Kaushik, Aydin Oztoprak, Randy Rockinson and Emmanuel Munguia Tapia

99.

CityFARM

Camillee Richman, Elaine Kung, Emma Feshbach, Jordan Rogoff, Mathew Daiter,
Kent Larson, Caleb Harper, Edward Platt, Preethi Vaidyanathan and Sophia Jaffee
By 2030, nine billion people will populate the globe and six out of every 10 will live in
cities. The future of global food production will mandate a paradigm shift to
resource-leveraged and environmentally sound urban food-growing solutions. The
CityFARM project explores building-integrated agriculture and environmentally
optimized growing. We are exploring what it means technologically, environmentally,
and socially to design industrially scalable agricultural systems in the heart of urban
areas. Through innovative research, and through development of hydroponic and
aeroponic systems, diagnostic and networked sensing, building integration, and
reductive energy design, CityFARM methodology reduces water consumption by 90
percent, eliminates chemical pesticides, and reduces embodied energy in produce by a
factor of four. By fundamentally rethinking "grow it THERE and eat it HERE," we can
eliminate environmental contaminants and increase access to nutrient-dense produce
in our future cities.

100. CityHOME: 200 SQ FT

Kent Larson, Hasier Larrea, Daniel Goodman, Oier Ario, Phillip Ewing
Live large in 200 square feet! An all-in-one disentangled robotic furniture piece makes it
possible to live comfortably in a tiny footprint not only by magically reconfiguring the
space, but also by serving as a platform for technology integration and experience
augmentation. Two hundred square feet has never seemed so large.

101. CityOffice

Kent Larson, Hasier Larrea, Luis Alonso, Carlos Rubio


Architectural robotics enable a hyper-efficient, dynamically reconfigurable co-working
space that accommodates a wide range of activities in a small area.

102. CityScope Barcelona


NEW LISTING


Kent Larson, Waleed Gowharji, Carson Smuts, J. Ira Winder and Yan Zhang
The "Barcelona" demo is an independent prototype designed to model and simulate
human interactions within a Barcelona-like urban environment. Different types of land
use (residential, office, and amenities) are configured into urban blocks and analyzed
with agent-based techniques.


103. CityScope
BostonBRT

Ryan Chin, Allenza Michel, Ariel Noyman, Jeffrey Rosenblum, Anson Stewart,
Phil Tinn, Ira Winder, Chris Zegras
CityScope is working with the Barr Foundation of Boston to develop a
tangible-interactive participatory environment for planning bus rapid transit (BRT).

104. CityScope Hamburg


NEW LISTING

Kent Larson, Ariel Noyman and J. Ira Winder


MIT CityScience is working with Hafencity University to develop CityScope for the
neighborhood of Rothenburgsort in Hamburg, Germany. The goal is to create an
interactive stakeholder engagement tool that also serves as the platform for joint
research of modules for city simulation. Researchers are developing modules for
walkability, neighborhood connectivity, energy efficiency, and economic activity, among
others.

105.

CityScope Mark I: Real-Time Data Observatory

Ira Winder, Mohammad Hadhrawi, Carson Smuts, and Kent Larson

Real-time geospatial data is visualized on an exhibition-scale 3D city model. The model is built of LEGO bricks, and visualization is performed by an array of calibrated projectors. Through computation, GIS data is "LEGO-tized" to create a LEGO abstraction of existing urban areas. Data layers include mobility systems, land use, social media, business activity, windflow simulations, and more.

106.

CityScope Mark II: Scout

Ira Winder

The CityScope "Scout" prototype integrates augmented reality with real-time mathematical modeling of geospatial systems. In practice, the technology transforms any tabletop into a canvas for land-use planning and walkability optimization. Users perform rapid prototyping with LEGO bricks and receive real-time simulation and evaluation feedback.

107.

CityScope Mark III: Dynamic 3D

Ira Winder, Grady Sain

The Dynamic 3D prototype allows users to edit a digital model by moving physical 3D abstractions of building typologies. Movements are automatically detected, scanned, and digitized so as to generate inputs for computational analysis. 3D information is also projected back onto the model to give the user feedback while edits are made.

108.

CityScope Mark IV: Playground

Ira Winder and Ariel Noyman (SA+P)

Playground is a full-sized, tangible 3D environment for rapid prototyping of building interventions in Kendall Square in Cambridge, Massachusetts. Through projection mapping and onscreen displays, users can receive feedback about the impacts of their interventions.

109.

CityScope Mark IVa: Riyadh

Ira Winder

We recently led a workshop in Saudi Arabia, with staff from the Riyadh Development Authority, to test a new version of our CityScope platform. With only an hour to work, four teams of five professionals competed to develop a redevelopment proposal for a neighborhood near the city center. The platform evaluated their designs according to energy, daylighting, and walkability.

110.

CityScope Mark IVb: Land Use/Transportation

Kent Larson, Carson Smuts and Ira Winder

CityScope Mark IVb is programmed to demonstrate and model the relationship between land use (live and work), population density, parking supply and demand, and traffic congestion.


111.

Context-Aware Dynamic Lighting

Ronan Lonergan, Shaun Salzberg, Harrison Hall, and Kent Larson

The robotic façade is conceived as a mass-customizable module that combines solar control, heating, cooling, ventilation, and other functions to serve an urban apartment. It attaches to the building "chassis" with standardized power, data, and mechanical attachments to simplify field installation and dramatically increase energy performance. The design makes use of an articulating mirror to direct shafts of sunlight to precise points in the apartment interior. Tiny, low-cost, easily installed wireless sensors and activity-recognition algorithms allow occupants to use a mobile phone interface to map activities of daily living to personalized sunlight positions. We are also developing strategies to control LED luminaires to turn off, dim, or tune the lighting to more energy-efficient spectra in response to the location, activities, and paths of the occupants.

112.

Measuring Urban Innovation

Talia Kaufmann, Kent Larson, Dan Harple and Victor Kashirin

Cities are hubs for innovation, characterized by densely populated areas where people and firms cluster together, share resources, and collaborate. In turn, dense cities show higher rates of economic growth and viability. Yet the specific places where innovation occurs in urban areas, and the socioeconomic conditions that encourage it, remain elusive for both researchers and policymakers. Understanding the social and spatial settings that enable innovation to accrue will equip policymakers and developers with the metrics to promote and sustain innovation in cities. This research will measure the attributes of innovation districts across the US in terms of their land-use configurations and population characteristics and behaviors. These measurements will be used to identify the factors that enable innovation, with the goal of developing a methodological approach for producing quantitative planning guidelines to support decision-making processes.

113.

Mobility on Demand Systems

Kent Larson, Ryan C.C. Chin, Chih-Chao Chuang, William Lark, Jr., Brandon Phillip Martin-Anderson and SiZhi Zhou

Mobility on Demand (MoD) systems are fleets of lightweight electric vehicles at strategically distributed electrical charging stations throughout a city. MoD systems solve the "first and last mile" problem of public transit, providing mobility between transit station and home or workplace. Users swipe a membership card at the MoD station and drive a vehicle to any other station (one-way rental). The Vélib' system of 20,000+ shared bicycles in Paris is the largest and most popular one-way rental system in the world. MoD systems incorporate intelligent fleet management through sensor networks, pattern recognition, and dynamic pricing. The benefits of Smart Grid technologies include intelligent electrical charging (including rapid charging), vehicle-to-grid (V2G), and surplus energy storage for renewable power generation and peak shaving for the local utility. We have designed three MoD vehicles: CityCar, RoboScooter, and GreenWheel bicycle. (Continuing the vision of William J. Mitchell.)

114. Persuasive Cities

Agnis Stibe, Ryan C. C. Chin and Kent Larson


Persuasive Cities research is aimed at advancing urban spaces to facilitate societal
changes. According to social science research, any well-designed environment can
become a strong influencer of what people think and do. There is an endlessly dynamic
interaction between a person, a particular behavior, and a specific environment.
Persuasive Cities research leverages this knowledge to engineer persuasive
environments and interventions for altering human behavior on a societal level. This
research is focused on socially engaging environments for supporting entrepreneurship
and innovation, reshaping routines and behavioral patterns in urban spaces, deploying
intelligent outdoor sensing for shifting mobility modes, enhancing environmentally
friendly behaviors through social norms, introducing interactive public feedback
channels to alter attitudes at scale, engaging residents through socially influencing
systems, exploring methods for designing persuasive neighborhoods, testing
agent-based models and simulations of behavioral interventions, and fostering adoption
of novel urban systems.


115.

Persuasive Electric Vehicle

Kent Larson, Michael Lin and Agnis Stibe

The Persuasive Electric Vehicle (PEV) addresses sedentary lifestyles, provides energy-efficient mobility, and takes advantage of existing bicycle lanes. Designed as a three-wheeler for stability, with a cover to protect from rain and an option for electric assist, the PEV makes biking compelling for various demographics. Persuasive interventions are displayed through user interaction with smartphones to encourage pedaling behavior. Influential strategies are designed for both the interior and exterior of the PEV. For example, an interior display can show how many previous riders actually pedaled while riding a particular PEV, and the exterior can change color depending on whether a rider pedals or not.

116.

Persuasive Urban Mobility

Agnis Stibe, Matthias Wunsch, Alexandra Millonig, Chengzhen Dai, Stefan Seer, Katja Schechtner, Ryan C. C. Chin and Kent Larson

The effects of global climate change, in combination with rapid urbanization, have forced cities to seek low-energy and less carbon-intensive modes of transport. Cities have adopted policies like congestion pricing to encourage their citizens to give up private automobiles and use mass transit, bicycling, or walking. In this research study, we examine how persuasion technologies can be utilized to encourage positive modal shifts in mobility behavior in cities. We are particularly interested in studying the key persuasive strategies that enable, motivate, and trigger users to shift from high-energy to low-energy modes. This project is a collaboration between the MIT Media Lab and the Austrian Institute of Technology (AIT).
Alumni Contributors: Sandra Richter and Katja Schechtner

117. ViewCube

Kent Larson and Carson Smuts


A tangible device for real-time spatial movement and perspectival orientation between
physical and digital 3D models.

Andy Lippman: Viral Communications


Creating scalable technologies that evolve with user inventiveness.

118. 8K Time into Space

NEW LISTING

Andrew Lippman and Hisayuki Ohmata

8K Time into Space is a user interface for a video exploration system with an 8K display. 8K is an ultra-high-definition video format that can present a huge amount of visual content on one display. In our system, video thumbnails with playback times shifted in chronological order are laid out like tiles. The time range of a scene that a viewer wants to check can be adjusted with a touch interface, and the resolution of the thumbnails changes depending on the range. 8K Time into Space aims to provide responsive and intuitive experiences for video consumption.

119. DbDb

NEW LISTING

Andrew Lippman and Travis Rich

DbDb (pronounced DubDub) is a tool for collaborative, forkable data analysis. Working to solve the challenges of creating reproducible and augmentable data analyses, DbDb provides an interface for archiving data, executing code, and visualizing the tree of forked analyses.
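A minimal sketch of the forkable-analysis tree DbDb describes, where each analysis node records its code and parent so any branch can be reproduced from the root; the class and method names here are illustrative, not DbDb's actual API:

```python
# Illustrative fork tree for "forkable data analysis": each node stores
# its code and parent, so any analysis can be forked and its full lineage
# replayed. Names are invented for this sketch; this is not DbDb's API.

class Analysis:
    def __init__(self, code, parent=None):
        self.code = code
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def fork(self, new_code):
        """Create a new analysis derived from this one."""
        return Analysis(new_code, parent=self)

    def lineage(self):
        """Code of every ancestor, root first: enough to reproduce this branch."""
        steps, node = [], self
        while node:
            steps.append(node.code)
            node = node.parent
        return list(reversed(steps))

root = Analysis("load('data.csv')")
clean = root.fork("drop_missing()")
alt = root.fork("impute_mean()")  # a sibling fork of the same raw data

print(clean.lineage())   # ["load('data.csv')", 'drop_missing()']
print(len(root.children))  # 2
```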


120. GIFGIF

Cesar A. Hidalgo, Andrew Lippman, Kevin Zeng Hu and Travis Rich


An animated GIF is a magical thing. It has the power to compactly convey emotion,
empathy, and context in a subtle way that text or emoticons often miss. GIFGIF is a
project to combine that magic with quantitative methods. Our goal is to create a tool
that lets people explore the world of GIFs by the emotions they evoke, rather than by
manually entered tags. A web site with 200,000 users maps the GIFs to an emotion
space and lets you peruse them interactively.
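GIFGIF's emotion space is built from many small human judgments rather than manual tags. One common way to turn pairwise votes ("which GIF better expresses amusement?") into per-emotion scores is an Elo-style rating update; the sketch below illustrates that general approach and is not GIFGIF's actual model:

```python
# Hypothetical sketch: turning pairwise "which GIF better evokes X?" votes
# into per-emotion scores with an Elo-style update. GIFGIF's real model is
# not specified in this catalog; this only shows the general technique.

from collections import defaultdict

K = 32  # learning rate: how much a single vote moves a score

def expected(r_a, r_b):
    """Probability that item a beats item b under a logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def vote(scores, emotion, winner, loser):
    """Update both GIFs' scores for one emotion after a pairwise vote."""
    r_w, r_l = scores[emotion][winner], scores[emotion][loser]
    gain = K * (1 - expected(r_w, r_l))
    scores[emotion][winner] = r_w + gain
    scores[emotion][loser] = r_l - gain

# Every GIF starts at a neutral 1000 for every emotion.
scores = defaultdict(lambda: defaultdict(lambda: 1000.0))

for _ in range(10):
    vote(scores, "amusement", "gif_A", "gif_B")

print(scores["amusement"]["gif_A"] > scores["amusement"]["gif_B"])  # True
```

With enough votes, each GIF acquires a score per emotion, which is what makes browsing "by the emotions they evoke" possible.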

121. IoT Recorder

NEW LISTING

Andrew Lippman and Thariq Shihipar

The physical world is increasingly coming online. We have things that measure, sense, and broadcast to the rest of the world. We call this the Internet of Things (IoT). But our cameras are blind to this new layer of metadata on reality. The IoT Recorder is a camera that understands what IoT devices it sees and what data they are streaming, thus creating a rich information "caption track" for the videos it records. Using this metadata, we intend to explore how this enables new video applications, starting with live events, news and sports, instructional videos, and intelligent time-lapse/video monitoring.

122. Me.TV

NEW LISTING

Andrew Lippman, Vivian Diep and Yasmine Rubinovitz

Me.TV is a web platform that combines the benefits of traditional television and on-demand viewing. The center of the Me.TV platform is a visual programming language that enables users to create complex rules with simple interactions: rules that capture static preferences, such as genre constraints, and plan for non-static ones, such as current mood and time of day, in as many channels as they want.

123. NewsClouds

Andrew Lippman and Thariq Shihipar

Here, we visualize how a story unfolds and is narrated by competing sources. NewsClouds visualizes the verbiage used by news organizations by creating word clouds that emphasize the intersections and differences between sources. NewsClouds allows interactive exploration of how these topics evolve and migrate between sources.
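Me.TV's rules (described above) combine static preferences, like genre constraints, with non-static ones, like time of day. A rough sketch of such composable rules in code, with names and rule shapes invented for illustration (Me.TV's actual language is visual):

```python
# Illustrative composable viewing rules in the spirit of Me.TV: each rule
# is a predicate over (show, current time), and rules combine into a
# channel definition. Names are invented; Me.TV's rule language is visual.

from datetime import time

def genre_is(genre):
    """Static preference: the show must match a genre."""
    return lambda show, now: show["genre"] == genre

def between(start, end):
    """Non-static preference: only active during a time window."""
    return lambda show, now: start <= now <= end

def all_of(*rules):
    """A channel admits a show only when every rule agrees."""
    return lambda show, now: all(rule(show, now) for rule in rules)

evening_comedy = all_of(genre_is("comedy"), between(time(18), time(23)))

show = {"title": "Example", "genre": "comedy"}
print(evening_comedy(show, time(20, 30)))  # True
print(evening_comedy(show, time(9, 0)))    # False
```

The design choice here mirrors the blurb: static and non-static constraints share one predicate shape, so simple interactions can stack them into arbitrarily many channels.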

124. Plethora

Andrew Lippman and Tomer Weller


Plethora creates simultaneous, localized, personal broadcasting networks that allow
audiences to form on-the-fly and build their own media streams. The display ranges
from personal devices such as phones, to embedded screens in kitchens. It uses
Bluetooth LE beacons to emulate local broadcast stations that signal their proximity.
Plethora inverts the relationship between mobility and the multiplicity of screens with
which we interact on a moment-to-moment basis. It also considers media as migrating
away from larger, high definition screens to a universe of personal, anytime, anywhere
representations.

125. QUANTIFY

Cesar A. Hidalgo, Andrew Lippman, Kevin Zeng Hu and Travis Rich


QUANTIFY is a generalized framework and JavaScript library to allow rapid
multi-dimensional "measurement" of subjective qualities of media. The goal is to make
qualitative metrics quantifiable. For everything from measuring emotional responses of
content to the cultural importance of world landmarks, QUANTIFY helps to elicit the raw
human subjectivity that fills much of our lives, and makes it programmatically
actionable.


126. SolarCoin
NEW LISTING

Andrew Lippman and Ariel Ekblaw

Bitcoin generates net-new value from "mining" in a distributed network, but at what real-world cost? We explore a solar-powered bitcoin mining rig, completely self-sustained by nature's radiant energy. By avoiding institutional electricity (which often entails an environmental and financial cost beyond the ROI of bitcoin mining), we mine bitcoin with a bit more soul. This platform serves as a basis for exploring modified blockchain protocols, both for tweaking digital currency and for investigating other applications, such as smart contracts and distributed publishing.

127. Sphera

Amir Lazarovich, Andrew Lippman

One future of media experience lies within a socially connected virtual world. VR started almost 30 years ago and is now wearable, real-time, integrated with sensing, and becoming transparent. Sphera realizes a socially driven 360-degree media space that includes ambient scenery, visual exploration, and integration with friends. By combining an Oculus Rift (VR head-mounted display), Microsoft Kinect (depth sensor), and a natural voice command interface, we created a socially connected, 360-degree immersive virtual world for media exploration and selection, as well as big-data manipulation and visualization.

128. SuperGlue

Tomer Weller and Andrew Lippman


Glue is a prototyping engine to support news and narrative analysis. It has evolved to
be "SuperGlue," with a new task of allowing browsing general material to identify
engineering projects. The growing set of existing modules analyzes web pages, video,
and exogenous data such as tweets, and creates fine-grained metadata, including
frame-by-frame analysis for video. We use this to organize material for presentation,
analysis, and summarization. Currently, the system provides named-entity extraction,
audio expression markers, face detectors, scene/edit point locators, excitement
trackers, and thumbnail summarization. Glue includes a video recorder and processes
14 DirecTV feeds, as well as video content crawled from the web. Video is retained
dependent on storage capacity and the database is permanent. SuperGlue is the
metadata driver for most Ultimate Media projects--a "digestion system" for mass media.

129. The Third Hand
NEW LISTING

Andrew Lippman and Brian Tice

All too often there are tasks we have to perform that require a third hand: one to hold a part, one to hold a tool, and one to hold an auxiliary piece. In this project, we construct parallel data tracks embedded in video whose purpose is to cause active devices such as Raspberry Pis and Arduinos to take action in response to what is happening on the screen. The goal is to implement tools at the viewing site that assist the viewer in doing a task; for example, moving a part to the right position for assembly, holding it in place, or picking it up. This work is related to the IoT Recorder.

130. VR Codes

Andy Lippman and Grace Woo

VR Codes are dynamic data invisibly hidden in television and graphic displays. They allow the display to simultaneously present visual information in an unimpeded way and real-time data to a camera. Our intention is to make social displays that many can use at once; using VR Codes, users can draw data from a display and control its use on a mobile device. We think of VR Codes as analogous to QR codes for video, and envision a future where every display in the environment contains latent information embedded in VR Codes.

131. Wall of Now


NEW LISTING

Page 26

Andrew Lippman and Tomer Weller


Wall of Now is a multi-dimensional media browser of recent news items. It attempts to
address our need to know everything by presenting a deliberately overwhelming
amount of media, while simplifying the categorization of the content into single entities.
Every column in the wall represents a different type of entity: people, countries, states,
companies, and organizations. Each column contains the top-trending stories of that
type in the last 24 hours. Pressing on an entity will reveal a stream of video that relates
to that specific entity. The Wall of Now is a single-view experience that challenges
previous perceptions of screen space utilization towards a future of extremely large,
high-resolution displays.

Tod Machover: Opera of the Future


Extending expression, learning, and health through innovations in musical composition,
performance, and participation.

132. Ambisonic Surround-Sound Audio Compression

Tod Machover and Charles Holbrow

Traditional music production and studio engineering depend on dynamic range
compression: audio signal processors that precisely and dynamically control the gain of
an audio signal in the time domain. This project expands on the traditional dynamic
range compression model by adding a spatial dimension. Ambisonic Compression
allows audio engineers to dynamically control the spatial properties of a
three-dimensional sound field, opening new possibilities for surround-sound design and
spatial music performance.

133. Breathing Window

NEW LISTING

Tod Machover and Rebecca Kleinberger

Breathing Window is a tool for non-verbal dialogue that reflects your own breathing
while also offering a window on another person's respiration. This prototype is an
example of shared human experiences (SHEs) crafted to improve the quality of human
understanding and interactions. Our work on SHEs focuses on first encounters with
strangers. We meet strangers every day, and without prior background knowledge of
the individual we often form opinions based on prejudices and differences. In this work,
we bring respiration to the foreground as one common experience of all living
creatures.

134. City Symphonies: Massive Musical Collaboration

Tod Machover, Akito Van Troyer, Benjamin Bloomberg, Charles Holbrow, David
Nunez, Simone Ovsey, Sarah Platte, Bryn Bliska, Rebecca Kleinberger, Peter
Alexander Torpey and Garrett Parrish

Until now, the impact of crowdsourced and interactive music projects has been limited:
the public contributes a small part of the final result, and is often disconnected from the
artist leading the project. We believe that a new musical ecology is needed for true
creative collaboration between experts and amateurs. Toward this goal, we have been
creating "city symphonies," each collaboratively composed with an entire city. We
designed the infrastructure needed to bring together an unprecedented number of
people, including a variety of web-based music composition applications, a social
media framework, and real-world community-building activities. We have premiered city
symphonies in Toronto, Edinburgh, Perth, and Lucerne, and are now developing, with
the support of the Knight Foundation, a symphony for Detroit, our first US city. We are
also working on scaling this process by mentoring independent groups, beginning with
the city of Akron, Ohio.

135. Command Not Found

David Nunez, Tod Machover, Cynthia Breazeal


A performance between a human and a robot tells the story of growing older and trying
to maintain friendships with those we meet along the way. This project explores
live-coding with a robot, in which the actor creates and executes software on a robot in
real time; the audience can watch the program evolve on screen, and the code itself is
part of the narrative.

MIT Media Lab

October 2015

Page 27

136. Death and the Powers: Global Interactive Simulcast

Tod Machover, Peter Torpey, Ben Bloomberg, Elena Jessop, Charles Holbrow,
Simone Ovsey, Garrett Parrish, Justin Martinez, and Kevin Nattinger

The live global interactive simulcast of the final February 2014 performance of "Death
and the Powers" in Dallas made innovative use of satellite broadcast and Internet
technologies to expand the boundaries of second-screen experience and interactivity
during a live remote performance. In the opera, Simon Powers uploads his mind,
memories, and emotions into The System, represented onstage through reactive
robotic, visual, and sonic elements. Remote audiences, via simulcast, were treated as
part of The System alongside Powers and the operabots. Audiences had an omniscient
view of the action of the opera, as presented through augmented, multi-camera video
and surround sound. Multimedia content delivered to mobile devices through the
Powers Live app gave remote audiences privileged perspectives from within The
System. Mobile devices also allowed audiences to influence The System by affecting
the illumination of the Winspear Opera House's Moody Foundation Chandelier.

137. Death and the Powers: Redefining Opera

Tod Machover, Ben Bloomberg, Peter Torpey, Elena Jessop, Bob Hsiung, Akito
van Troyer

"Death and the Powers" is a groundbreaking opera that brings a variety of
technological, conceptual, and aesthetic innovations to the theatrical world. Created by
Tod Machover (composer), Diane Paulus (director), and Alex McDowell (production
designer), the opera uses the techniques of tomorrow to address age-old human
concerns of life and legacy. The unique performance environment, including
autonomous robots, expressive scenery, new Hyperinstruments, and human actors,
blurs the line between animate and inanimate. The opera premiered in Monte Carlo in
fall 2010, with additional performances in Boston and Chicago in 2011 and a new
production with a global, interactive simulcast in Dallas in February 2014. The DVD of
the Dallas performance was released in April 2015.

138. Empathy and the Future of Experience

Tod Machover, Akito Van Troyer, Benjamin Bloomberg, Bryn Bliska, Charles
Holbrow, David Nunez, Rebecca Kleinberger, Simone Ovsey, Sarah Platte, Peter
Torpey, Kelly Donovan, Meejin Yoon and the Empathy and Experience class

Nothing is more important in today's troubled world than eliminating prejudice and
misunderstanding, and replacing them with communication and empathy. We explore
the possibility of creating public experiences that dramatically increase individual and
community awareness of the power of empathy on an unprecedented scale. We draw
on numerous precedents from the Opera of the Future group that have proposed
concepts and technologies to inspire and intensify human connectedness (such as
Sleep No More, Death and the Powers, Vocal Vibrations, City Symphonies, and
Hyperinstruments) and from worldwide instances of transformative shared human
experience (such as the Overview Effect, Human Libraries, immersive theatre, and
non-sectarian spiritual traditions). The objective is to create a model of a multisensory,
participatory, spatially radical installation that will break down barriers between people
of immensely different backgrounds, providing instantaneous understanding of, as well
as long-term commitment to, empathic communication.

139. Fensadense

Tod Machover, Ben Bloomberg, Peter Torpey, Garrett Parrish, Kevin King
Fensadense is a new work for 10-piece ensemble composed by Tod Machover,
commissioned for the Lucerne Festival in summer 2015. The project represents the
next generation of hyperinstruments, measuring relative qualities across many
performers, where previous systems looked only at a single performer.
Off-the-shelf components were used to collect data about movement and muscle
tension of each musician. The data was analyzed using the Hyperproduction platform to
create meaningful production control for lighting and sound systems based on the
connection of the performers, with a focus on qualities such as momentum, connection,
and tension of the ensemble as a whole. The project premiered at the Lucerne Festival,
and a European tour is scheduled for spring 2016.
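As a rough illustration of what an ensemble-level "connection" measure could look like, the sketch below averages pairwise correlations between per-performer motion signals into a single control value. The function name and the choice of metric are assumptions for illustration, not the actual Hyperproduction analysis:

```python
import numpy as np

def ensemble_connection(signals):
    """Hypothetical 'connection' metric for an ensemble.

    signals : 2D array, one row per performer, each row a time series
              of that performer's motion or muscle-tension data.
    Returns the mean pairwise correlation, in [-1, 1]; higher values
    mean the performers are moving more in sync.
    """
    corr = np.corrcoef(signals)          # performer-by-performer matrix
    n = corr.shape[0]
    upper = corr[np.triu_indices(n, k=1)]  # each pair counted once
    return float(upper.mean())
```

With a 10-piece ensemble this folds 45 pairwise terms into one value that could drive, say, lighting intensity.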


140. Hi-Lo Conductor

Tod Machover, Sarah Platte and David Nunez


We designed a system in which a novice, with limited musical background, can perform
alongside an experienced conductor to create an interpretation of a classical piece of
music. A practiced conductor will use precise, nuanced movements to drive tempo and
articulation of the performance, while the novice uses a more intuitive gestural
vocabulary to change the expressive properties of solo instrumentation. Using readily
available and inexpensive technology, we can capture and decode both experienced
and novice gestures to establish a unique kind of musical collaboration.

141. Hyperinstruments

Tod Machover
The Hyperinstruments project creates expanded musical instruments and uses
technology to give extra power and finesse to virtuosic performers. They were designed
to augment a wide range of traditional musical instruments and have been used by
some of the world's foremost performers (Yo-Yo Ma, the Los Angeles Philharmonic,
Peter Gabriel, and Penn & Teller). Research focuses on designing computer systems
that measure and interpret human expression and feeling, exploring appropriate
modalities and content of interactive art and entertainment environments, and building
sophisticated interactive musical instruments for non-professional musicians, students,
music lovers, and the general public. Recent projects involve the production of a new
version of the "classic" Hyperstring Trilogy for the Lucerne Festival, and the design of a
new generation of Hyperinstruments, for Fensadense and other projects, that
emphasizes measurement and interpretation of inter-player expression and
communication, rather than simply the enhancement of solo performance.
Alumni Contributors: Roberto M. Aimi, Mary Farbood, Ed Hammond, Tristan Jehan,
Margaret Orth, Dan Overholt, Egon Pasztor, Joshua Strickon, Gili Weinberg and Diana
Young

142. Hyperproduction: Advanced Production Systems

Tod Machover and Benjamin Bloomberg

Hyperproduction is a conceptual framework and a software toolkit that allows producers
to specify a descriptive computational model, and consequently an abstract state, for a
live experience through traditional operating paradigms such as mixing audio or
operating lighting, sound, and video systems. The hyperproduction system is able to
interpret this universal state and automatically utilize additional production systems,
allowing a small number of producers to cohesively guide the attention and perspective
of an audience using many or very complex production systems simultaneously. The
toolkit is under active development; it has been used for new pieces such as
Fensadense, and to recreate older systems such as those for the original Hyperstring
Trilogy as part of the Lucerne Festival in 2015. Work continues to enable new
structures and abstraction within the framework.

143. Hyperscore

Tod Machover

Hyperscore is an application that introduces children and non-musicians to musical
composition and creativity in an intuitive and dynamic way. The "narrative" of a
composition is expressed as a line-gesture, and the texture and shape of this line are
analyzed to derive a pattern of tension-release, simplicity-complexity, and variable
harmonization. The child creates or selects individual musical fragments in the form of
chords or melodic motives, and layers them onto the narrative-line with expressive
brushstrokes. The Hyperscore system automatically realizes a full composition from
this graphical representation. Currently, Hyperscore uses a mouse-based interface; the
final version will support freehand drawing and integration with the Music Shapers and
Beatbugs to provide a rich array of tactile tools for manipulating the graphical score.

Alumni Contributors: Mary Farbood, Ed Hammond, Tristan Jehan, Margaret Orth, Dan
Overholt, Egon Pasztor, Joshua Strickon, Gili Weinberg and Diana Young
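One concrete way a drawn stroke can be turned into a tension-release pattern is to look at its local curvature: sharp bends read as tension, straight runs as release. The sketch below is only illustrative of that idea, not the actual Hyperscore algorithm, and the function name is made up:

```python
import numpy as np

def tension_profile(points):
    """Toy line-gesture analysis: map a drawn stroke to a tension curve.

    points : N x 2 array of (x, y) samples along the stroke.
    Returns a per-sample value in [0, 1], where sharper local bends
    (higher curvature) yield higher 'tension'.
    """
    d1 = np.gradient(points, axis=0)   # first derivative along the stroke
    d2 = np.gradient(d1, axis=0)       # second derivative
    num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-9
    curvature = num / den
    peak = curvature.max()
    return curvature / peak if peak > 0 else curvature
```

A straight line comes out as all zeros (pure release); a stroke with a corner peaks at 1 near the bend.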


144. Maestro Myth: Exploring the Impact of Conducting Gestures on Musicians' Bodies and Sounding Result

NEW LISTING

Tod Machover and Sarah Platte

Expert or fraud, the powerful person in front of an orchestra or choir attracts both hate
and admiration. But what is the actual influence a conductor has on the musicians and
the sounding result? To throw light on the fundamental principles of this special gestural
language, we try to establish a direct correlation between the conductor's gestures and
muscle tension and the musicians' physically measurable reactions in onset precision,
muscle tension, and sound quality. We also measure whether the mere form of these
gestures causes different levels of stress or arousal. With this research we aim not only
to contribute to the development of a theoretical framework on conducting, but also to
enable a precise mapping of gestural parameters in order to develop and demonstrate
a new system for the optional enhancement of musical learning, performance, and
expression.

145. Media Scores

Tod Machover and Peter Torpey

Media Scores extends the concept of a musical score to other modalities, facilitating the
process of authoring and performing multimedia compositions and providing a medium
through which to realize a modern-day Gesamtkunstwerk. The web-based Media
Scores environment and related show control systems leverage research into
multimodal representation and encoding of expressive intent. Using such a tool, the
composer will be able to shape an artistic work that may be performed through a variety
of media and modalities. Media Scores offer the potential for authoring content using
live performance data as well as audience participation and interaction. This paradigm
bridges the extremes of the continuum from composition to performance, allowing for
improvisation. The Media Score also provides a common point of reference in
collaborative productions, as well as the infrastructure for real-time control of
technologies used during live performance.

146. Music Visualization via Musical Information Retrieval

NEW LISTING

Tod Machover and Thomas Sanchez

In a study of human perception of music in relation to different representations of video
graphics, this project explores real-time automatic synchronization between audio and
image, aiming to make the relationship between the two closer and more consistent.
The connection is made using audio signal processing techniques that automatically
extract data from the music, which is subsequently mapped to visual objects. The
visual elements are influenced by data obtained from various Musical Information
Retrieval (MIR) techniques. By visualizing music, one can stimulate the nervous
system to recognize different musical patterns and extract new features.

147. Remote Theatrical Immersion: Extending "Sleep No More"

Tod Machover, Punchdrunk, Akito Van Troyer, Ben Bloomberg, Gershon Dublon,
Jason Haas, Elena Jessop, Brian Mayton, Eyal Shahar, Jie Qi, Nicholas Joliat,
and Peter Torpey

We have collaborated with London-based theater group Punchdrunk to create an online
platform connected to their NYC show, Sleep No More. In the live show, masked
audience members explore and interact with a rich environment, discovering their own
narrative pathways. We have developed an online companion world to this real-life
experience, through which online participants partner with live audience members to
explore the interactive, immersive show together. Pushing the current capabilities of
web standards and wireless communications technologies, the system delivers
personalized multimedia content, allowing each online participant to have a unique
experience co-created in real time by his own actions and those of his onsite partner.
This project explores original ways of fostering meaningful relationships between online
and onsite audience members, enhancing the experiences of both through the
affordances that exist only at the intersection of the real and the virtual worlds.


148. Requiem for Rhinoceros

NEW LISTING

Tod Machover and David Nunez

An installation and performance commemorating the final days of the nearly extinct
northern white rhino, "Requiem for Rhinoceros" calls attention to the four northern white
rhinos that exist on earth today by giving flight to robotic puppet effigies, investigating
our collective empathic responses. This project demonstrates research in animism and
computational choreography as a means to evoke expressivity in machine motion in
performance.

149. Sound Cycles

NEW LISTING

Tod Machover and Charles Holbrow

Sound Cycles is a new interface for exploring, remixing, and composing with large
volumes of audio content. The project presents a simple and intuitive interface for
scanning through long audio files or pre-recorded music. Sound Cycles integrates with
existing digital audio workstations for on-the-fly editing, audio analysis, and feature
extraction.

150. Using the Voice As a Tool for Self-Reflection

Tod Machover and Rebecca Kleinberger

Our voice is an important part of our individuality. From the voices of others, we
understand a wealth of non-linguistic information, such as identity, socio-cultural clues,
and emotional state. But the relationship we have with our own voice is less obvious.
We don't hear it the way others do, and our brain treats it differently from any other
sound. Yet its sonority is deeply connected with how we are perceived by society and
how we see ourselves, body and mind. This project is composed of software, devices,
installations, and thoughts used to challenge us to gain new insights on our voices. To
increase self-awareness, we propose different ways to extend, project, and visualize
the voice. We show how our voices sometimes escape our control, and we explore the
consequences in terms of self-reflection, cognitive processes, therapy, affective
feature visualization, and communication improvement.

151. Vocal Vibrations: Expressive Performance for Body-Mind Wellbeing

Tod Machover, Charles Holbrow, Elena Jessop, Rebecca Kleinberger, Le
Laboratoire, and the Dalai Lama Center at MIT

Vocal Vibrations explores relationships between human physiology and the vibrations
of the voice. The voice is an expressive instrument that nearly everyone possesses and
that is intimately linked to the physical form. In collaboration with Le Laboratoire and the
MIT Dalai Lama Center, we examine the hypothesis that voices can influence mental
and physical health through physico-physiological phenomena. The first Vocal
Vibrations installation premiered in Paris, France, in March 2014. The public "Chapel"
space of the installation encouraged careful meditative listening, while a private
"Cocoon" environment guided an individual to explore his or her voice, augmented by
tactile and acoustic stimuli. Vocal Vibrations then had a successful showing as the
inaugural installation at the new Le Laboratoire Cambridge from November 2014
through March 2015. The installation was incorporated into Le Laboratoire's
Memory/Witness of the Unimaginable exhibit, April 17-August 16, 2015.

Alumni Contributor: Eyal Shahar

Pattie Maes: Fluid Interfaces


Integrating digital interfaces more naturally into our physical lives, enabling insight,
inspiration, and interpersonal connections.

152. Augmented Airbrush

Roy Shilkrot, Amit Zoran, Pattie Maes and Joseph A. Paradiso


We present an augmented handheld airbrush that allows unskilled painters to
experience the art of spray painting. Inspired by similar smart tools for fabrication, our
handheld device uses 6DOF tracking, mechanical augmentation of the airbrush trigger,
and a specialized algorithm to let the painter apply color only where indicated by a
reference image. It acts both as a physical spraying device and as an intelligent digital
guiding tool that provides manual and computerized control. Using an inverse rendering
approach allows for a new augmented painting experience with unique results. We
present our novel hardware design, control software, and a discussion of the
implications of human-computer collaborative painting.

153. Enlight

Tal Achituv, Natan Linder, Rony Kubat, Pattie Maes and Yihui Saw
In physics education, virtual simulations have given us the ability to show and explain
phenomena that are otherwise invisible to the naked eye. However, experiments with
analog devices still play an important role. They allow us to verify theories and discover
ideas through experiments that are not constrained by software. What if we could
combine the best of both worlds? We achieve that by building our applications on a
projected augmented reality system. By projecting onto physical objects, we can paint
the phenomena that are invisible. With our system, we have built "physical
playgrounds": simulations that are projected onto the physical world and that respond to
detected objects in the space. Thus, we can draw virtual field lines on real magnets,
track and provide history on the location of a pendulum, or even build circuits with both
physical and virtual components.
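The magnet example above boils down to evaluating a field model at the projector's pixel positions. A minimal 2D dipole-field sketch (arbitrary units; a standalone illustration, not the Enlight codebase):

```python
import numpy as np

def dipole_field(px, py, mx=0.0, my=1.0):
    """2D magnetic dipole field at points (px, py) for a dipole moment
    (mx, my) at the origin -- the kind of quantity a projected AR
    system could render as field lines over a real magnet.

    Implements B ~ (3(m.r̂)r̂ - m) / r^3 component-wise."""
    r = np.sqrt(px ** 2 + py ** 2) + 1e-12   # avoid division by zero
    rx, ry = px / r, py / r                   # unit vector toward the point
    m_dot_r = mx * rx + my * ry
    bx = (3 * m_dot_r * rx - mx) / r ** 3
    by = (3 * m_dot_r * ry - my) / r ** 3
    return bx, by
```

Tracing short steps along these vectors from seed points near the magnet yields the projected field lines; on the dipole axis the field is twice as strong as at the same distance on the equator, with opposite direction, which the sketch reproduces.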

154. EyeRing: A Compact, Intelligent Vision System on a Ring

Roy Shilkrot and Suranga Nanayakkara

EyeRing is a wearable, intuitive interface that allows a person to point at an object to
see or hear more information about it. We came up with the idea of a micro-camera
worn as a ring on the index finger with a button on the side, which can be pushed with
the thumb to take a picture or a video that is then sent wirelessly to a mobile phone to
be analyzed. The user tells the system what information they are interested in and
receives the answer in either auditory or visual form. The device also provides some
simple haptic feedback. This finger-worn configuration of sensors and actuators opens
up a myriad of possible applications for the visually impaired as well as for sighted
people.

155. FingerReader

Roy Shilkrot, Jochen Huber, Pattie Maes and Suranga Nanayakkara

FingerReader is a finger-worn device that helps the visually impaired to effectively and
efficiently read paper-printed text. It works in a local-sequential manner, scanning text
to enable reading of single lines or blocks of text, or skimming the text for important
sections, while providing auditory and haptic feedback.

156. GlassProv Improv Comedy System

Pattie Maes, Scott Greenwald, Baratunde Thurston and Cultivated Wit

As part of a Google-sponsored Glass developer event, we created a Glass-enabled
improv comedy show together with noted comedians from ImprovBoston and Big Bang
Improv. The actors, all wearing Glass, received cues in real time in the course of their
improvisation. In contrast with the traditional model for improv comedy, punctuated by
"freezing" and audience members shouting suggestions, using Glass allowed actors to
seamlessly integrate audience suggestions. Actors and audience members agreed that
this was a fresh take on improv comedy. It was a powerful demonstration that cues on
Glass are suitable for performance: actors could become aware of the cues without
having their concentration or flow interrupted, and then view them at an appropriate
time thereafter.

157. HandsOn: A Gestural System for Remote Collaboration Using Augmented Reality

Kevin Wong and Pattie Maes

2D screens, even stereoscopic ones, limit our ability to interact with and collaborate on
3D data. We believe that an augmented reality solution, where 3D data is seamlessly
integrated into the real world, is promising. We are exploring a collaborative augmented
reality system for visualizing and manipulating 3D data using a head-mounted,
see-through display that allows for communication and data manipulation using simple
hand gestures.


158. HRQR

NEW LISTING

Valentin Heun, Eythor Runar Eiriksson

HRQR is a visual Human and Machine Readable Quick Response Code that can
replace usual 2D barcode and QR Code applications. The code can be read by humans
in the same way it can be read by machines. Instead of relying on computational error
correction, the system allows a human to read the message and therefore reinterpret
errors in the visual image. The design is highly inspired by Kufic, a 2,000-year-old style
of Arabic calligraphy.

159. Hybrid Objects

Pattie Maes and Valentin Heun

A web technology-based update of the Smarter Objects and Reality Editor projects.

160. Invisibilia: Revealing


Invisible Data as a
Tool for Experiential
Learning
NEW LISTING

Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos


Invisibilia seeks to explore the use of Augmented Reality (AR), head-mounted displays
(HMD), and depth cameras to create a system that makes invisible data from our
environment visible, combining widely accessible hardware to visualize layers of
information on top of the physical world. Using our implemented prototype, the user can
visualize, interact with, and modify properties of sound waves in real time by using
intuitive hand gestures. Thus, the system supports experiential learning about certain
physics phenomena through observation and hands-on experimentation.

161. JaJan!: Remote Language Learning in Shared Virtual Space

Kevin Wong, Takako Aikawa and Pattie Maes

JaJan! is a telepresence system wherein remote users can learn a second language
together while sharing the same virtual environment. JaJan! can support five aspects of
language learning: learning in context; personalization of learning materials; learning
with cultural information; enacting language-learning scenarios; and supporting
creativity and collaboration. Although JaJan! is still in an early stage, we are confident
that it will bring profound changes to the ways in which we experience language
learning and can make a great contribution to the field of second language education.

162. LuminAR

Natan Linder, Pattie Maes and Rony Kubat

LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into
a new category of robotic, digital information devices. The LuminAR Bulb combines a
pico-projector, camera, and wireless computer in a compact form factor. This
self-contained system provides users with just-in-time projected information and a
gestural user interface, and it can be screwed into standard light fixtures everywhere.
The LuminAR Lamp is an articulated robotic arm, designed to interface with the
LuminAR Bulb. Both LuminAR form factors dynamically augment their environments
with media and information, while seamlessly connecting with laptops, mobile phones,
and other electronic devices. LuminAR transforms surfaces and objects into interactive
spaces that blend digital media and information with the physical space. The project
radically rethinks the design of traditional lighting objects, and explores how we can
endow them with novel augmented-reality interfaces.

163. MARS: Manufacturing Augmented Reality System

Rony Daniel Kubat, Natan Linder, Ben Weissmann, Niaja Farve, Yihui Saw and
Pattie Maes

Projected augmented reality in the manufacturing plant can increase worker
productivity, reduce errors, gamify the workspace to increase worker satisfaction, and
collect detailed metrics. We have built new LuminAR hardware customized for the
needs of the manufacturing plant, and software for a specific manufacturing use case.

164. Move Your Glass

Niaja Farve and Pattie Maes

Move Your Glass is an activity and behavior tracker that also tries to increase wellness
by nudging the wearer to engage in positive behaviors.


165. Open Hybrid


NEW LISTING

Valentin Heun, Shunichi Kasahara, James Hobin, Kevin Wong, Michelle Suh,
Benjamin F Reynolds, Marc Teyssier, Eva Stern-Rodriguez, Afika A Nyati, Kenny
Friedman, Anissa Talantikite, Andrew Mendez, Jessica Laughlin, Pattie Maes
Open Hybrid is an open source augmented reality platform for physical computing and
the Internet of Things. It is based on web technologies and Arduino.

166. Reality Editor: Programming Smarter Objects

Valentin Heun, James Hobin, Pattie Maes

The Reality Editor system supports editing the behavior and interfaces of so-called
"smart objects": objects or devices that have an embedded processor and
communication capability. Using augmented reality techniques, the Reality Editor maps
graphical elements directly on top of the tangible interfaces found on physical objects,
such as push buttons or knobs. The Reality Editor allows flexible reprogramming of the
interfaces and behavior of the objects, as well as defining relationships between smart
objects in order to easily create new functionalities.

167. Remot-IO: A System for Reaching into the Environment of a Remote Collaborator

Judith Amores Fernandez, Xavier Benavides Palos and Pattie Maes

Remot-IO is a system for mobile collaboration and remote assistance around
Internet-connected devices. It uses two head-mounted displays, cameras, and depth
sensors to enable a remote expert to be immersed in a local user's point of view, and to
control devices in that user's environment. The remote expert can provide guidance
through hand gestures that appear in real time in the local user's field of view as
superimposed 3D hands. In addition, the remote expert can operate devices in the
novice's environment and bring about physical changes by using the same hand
gestures the novice would use. We describe a smart radio where the knobs of the radio
can be controlled by local and remote users. Moreover, the user can visualize, interact
with, and modify properties of sound waves in real time by using intuitive hand
gestures.

168. Scanner Grabber

Tal Achituv

Scanner Grabber is a digital police scanner that enables reporters to record, play back,
and export audio, as well as archive public safety radio (scanner) conversations. Like a
TiVo for scanners, it's an update on technology that has been stuck in the last century.
It's a great tool for newsrooms. For instance, a problem for reporters is missing the
beginning of an important police incident because they have stepped away from their
desk at the wrong time. Scanner Grabber solves this because conversations can be
played back. Also, snippets of exciting audio, for instance a police chase, can be
exported and embedded online. Reporters can listen to files while writing stories, or
listen to older conversations to get a more nuanced grasp of police practices or
long-term trouble spots. Editors and reporters can use the tool for collaborating, or for
crowdsourcing and public collaboration.

169. ScreenSpire

Pattie Maes, Tal Achituv, Chang Long Zhu Jin and Isa Sobrinho
Screen interactions have been shown to contribute to increases in stress, anxiety, and
deficiencies in breathing patterns. Since better respiration patterns can have a positive
impact on wellbeing, ScreenSpire improves respiration patterns during information work
using subliminal biofeedback. By using subtle graphical variations tuned to influence
the user subconsciously, user distraction and cognitive load are minimized. To enable
truly seamless interaction, we have adapted an RF-based sensor (the ResMed S+
sleep sensor) to serve as a screen-mounted, contact-free respiration sensor.
Traditionally, respiration sensing is achieved with either invasive or on-skin sensors
(such as a chest belt); a contact-free sensor contributes to increased ease, comfort,
and user compliance, since no special actions are required from the user.
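A subliminal pacing signal of the kind described above can be as simple as a slow, low-amplitude sinusoid driving an on-screen property such as overlay opacity. The rate, depth, and function name below are illustrative assumptions, not the actual ScreenSpire implementation:

```python
import math

def subliminal_alpha(t, rate_bpm=6.0, depth=0.02):
    """Opacity offset for a subtle on-screen 'breathing' overlay.

    t        : time in seconds
    rate_bpm : target breathing rate (6 breaths/min = slow paced breathing)
    depth    : peak modulation; 2% is small enough to stay near-subliminal
    Returns a value in [-depth, +depth] to add to a base opacity each frame.
    """
    return depth * math.sin(2 * math.pi * (rate_bpm / 60.0) * t)
```

A render loop would call this once per frame and add the result to the overlay's base alpha, so the screen "breathes" at the target rate without the user consciously noticing.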


170. ShowMe: Immersive Remote Collaboration System with 3D Hand Gestures

Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos

ShowMe is an immersive mobile collaboration system that allows remote users to
communicate with peers using video, audio, and gestures. With this research, we
explore the use of head-mounted displays and depth-sensor cameras to create a
system that (1) enables remote users to be immersed in another person's view, and (2)
offers a new way of sending and receiving the guidance of an expert through 3D hand
gestures. With our system, both users are immersed in the same physical
environment and can perceive real-time inputs from each other.

171. SmileCatcher

Niaja Farve and Pattie Maes

SmileCatcher is a game, played solo or in groups, that attempts to increase
happiness. Research has shown that smiling correlates directly with happiness and can
even produce happiness in a person. A user playing the game tries to collect as many
smiles as they can from real people they interact with throughout the day. In
single-player mode the user compares scores over multiple days, while multiple players
compare their scores to one another. The objective of the tool is to encourage positive
social interactions through gamification.

172. STEM Accessibility Tool

Pattie Maes and Rahul Namdev

We are developing a very intuitive and interactive platform to make complex
information--especially science, technology, engineering, and mathematics (STEM)
material--truly accessible to blind and visually impaired students by using a tactile
device with no loss of information compared with printed materials. A key goal of this
project is to develop tactile information-mapping protocols through which the tactile
interface can best convey educational and other graphical materials.

173. TagMe

Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos

TagMe is an end-user toolkit for easy creation of responsive objects and environments.
It consists of a wearable device that recognizes the object or surface the user is
touching. The user can make everyday objects come to life through the use of RFID tag
stickers, which are read by an RFID bracelet whenever the user touches the object. We
present a novel approach to create simple and customizable rules based on emotional
attachment to objects and social interactions of people. Using this simple technology,
the user can extend their application interfaces to include physical objects and surfaces
in their personal environment, allowing people to communicate through everyday
objects in very low-effort ways.

174. The Challenge

Natasha Jaques, Niaja Farve, Pattie Maes and Rosalind W. Picard


Individuals who work in sedentary occupations are at increased risk of a number of
serious health consequences. This project involves both a tool and an experiment
aimed at decreasing sedentary activity and promoting social connections among
members of the MIT Media Lab. Our system will ask participants to sign up for short
physical challenges (ping pong, foosball, walking) and pair them with a partner to
perform the activity. Participants' overall activity levels will be monitored with an activity
tracker during the course of the study to assess the effectiveness of the system.


Neri Oxman: Mediated Matter


Designing for, with, and by nature.

175. 3D Printing of Functionally Graded Materials

Neri Oxman and Steven Keating

Functionally graded materials--materials with spatially varying composition or
microstructure--are omnipresent in nature. From palm trees with radial density
gradients, to the spongy trabeculae structure of bone, to the hardness gradient found in
many types of beaks, graded materials offer material and structural efficiency. But in
man-made structures such as concrete pillars, materials are typically volumetrically
homogenous. While using homogenous materials allows for ease of production,
improvements in strength, weight, and material usage can be obtained by designing
with functionally graded materials. To achieve graded material objects, we are working
to construct a 3D printer capable of dynamic mixing of composition material. Starting
with concrete and UV-curable polymers, we aim to create structures, such as a
bone-inspired beam, which have functionally graded materials. This research was
sponsored by the NSF EAGER award: Bio-Beams: FGM Digital Design & Fabrication.

176. Additive Manufacturing in Glass: Electrosintering and Spark Gap Glass

Neri Oxman, Steven Keating, John Klein

Our initial experiments in spark electrosintering fabrication have demonstrated a
capacity to solidify granular materials (35-88 micron soda ash glass powder) rapidly
using high voltages and power in excess of 1 kW. The testbed high-voltage setup
comprises a 220V 60A variable autotransformer and a 14,400V line transformer. There
are two methods to form members using electrosintering: the one-electrode drag (1ED)
and two-electrode drag (2ED) techniques. The 1ED technique leaves the first electrode
static while dragging the second through the granular mixture. This maintains a live
current through the drag path and increases the thickness of the member due to the
dissipation of heat. Large member elements have been produced with a tube diameter
of around 0.75". The 2ED method pulls both electrodes through the granular mixture
together, sintering the material between the electrodes in a more controlled manner.

177. Anthozoa

Neri Oxman

A 3D-printed dress was debuted during Paris Fashion Week Spring 2013 as part of a
collaboration with fashion designer Iris Van Herpen for her show "Voltage." The
3D-printed skirt and cape were produced using Stratasys' unique Objet Connex
multi-material 3D printing technology, which allows a variety of material properties to be
printed in a single build. This allowed both hard and soft materials to be incorporated
within the design, crucial to the movement and texture of the piece. Core contributors
include: Iris Van Herpen, fashion designer (Amsterdam); Keren Oxman, artist and
designer (NY); and W. Craig Carter (Department of Materials Science and Engineering,
MIT). Fabricated by Stratasys.

178. Beast

Neri Oxman
Beast is an organic-like entity created synthetically by the incorporation of physical
parameters into digital form-generation protocols. A single continuous surface, acting
both as structure and as skin, is locally modulated for both structural support and
corporeal aid. Beast combines structural, environmental, and corporeal performance by
adapting its thickness, pattern density, stiffness, flexibility, and translucency to load,
curvature, and skin-pressured areas respectively.

179. Bots of Babel

Neri Oxman, Jorge Duro-Royo, Markus Kayser, Jared Laucks and Laia
Mogas-Soldevila
The Biblical story of the Tower of Babel involved a deliberate plan hatched by mankind
to construct a platform from which man could fight God. The tower represented the first
documented attempt at constructing a vertical city. The divine response to the master

Page 36

October 2015

MIT Media Lab

plan was to sever communication by instilling a different language in each builder.


Tragically, the building's ultimate destruction came about through the breakdown of
communications between its fabricators. In this installation we redeem the Tower of
Babel by creating its antithesis. We will construct a virtuous, decentralized, yet highly
communicative building environment of cable-suspended fabrication bots that together
build structures bigger than themselves. We explore themes of asynchronous motion,
multi-nodal fabrication, lightweight additive manufacturing, and the emergence of form
through fabrication. (With contributions from Carlos Gonzalez Uribe and Dr. James
Weaver (WYSS Institute and Harvard University))

180. Building-Scale 3D Printing

Neri Oxman, Steven Keating and John Klein

How can additive fabrication technologies be scaled to building-sized construction? We
introduce a novel method of mobile swarm printing that allows small robotic agents to
construct large structures. The robotic agents extrude a fast-curing material which
doubles as both a concrete mold for structural walls and as a thermal insulation layer.
This technique offers many benefits over traditional construction methods, such as
speed, custom geometry, and cost. In addition, building utilities such as wiring and
plumbing can be integrated directly into the printing process. This research was
sponsored by the NSF EAGER award: Bio-Beams: FGM Digital Design & Fabrication.

181. Carpal Skin

Neri Oxman

Carpal Skin is a prototype for a protective glove designed to prevent Carpal Tunnel
Syndrome, a medical condition in which the median nerve is compressed at the wrist,
leading to numbness, muscle atrophy, and weakness in the hand. Night-time wrist
splinting is the recommended treatment for most patients before going into carpal
tunnel release surgery. Carpal Skin is a process by which to map the pain-profile of a
particular patient--its intensity and duration--and to distribute hard and soft materials to
fit the patient's anatomical and physiological requirements, limiting movement in a
customized fashion. The form-generation process is inspired by animal coating patterns
in the control of stiffness variation.

182. CNSILK: Computer Numerically Controlled Silk Cocoon Construction

Neri Oxman

CNSILK explores the design and fabrication potential of silk fibers inspired by silkworm
cocoons for the construction of woven habitats. It explores a novel approach to the
design and fabrication of silk-based building skins by controlling the mechanical and
physical properties of spatial structures inherent in their microstructures using multi-axis
fabrication. The method offers construction without assembly, such that material
properties vary locally to accommodate structural and environmental requirements.
This approach stands in contrast to functional assemblies and kinetically actuated
facades, which require a great deal of energy to operate and are typically maintained by
global control. Such material architectures could simultaneously bear structural load,
change their transparency so as to control light levels within a spatial compartment
(building or vehicle), and open and close embedded pores so as to ventilate a space.

183. Digitally Reconfigurable Surface

Neri Oxman and Benjamin Peters

The digitally reconfigurable surface is a pin-matrix apparatus for directly creating rigid
3D surfaces from a computer-aided design (CAD) input. A digital design is uploaded
into the device, and a grid of thousands of tiny pins, much like the popular pin-art
toy, is actuated to form the desired surface. A rubber sheet is held by vacuum
pressure onto the tops of the pins to smooth out the surface they form; this strong
surface can then be used for industrial forming operations, simple resin casting, and
many other applications. The novel phase-changing electronic clutch array allows the
device to have independent position control over thousands of discrete pins with only a
single motorized "push plate," lowering the complexity and manufacturing cost of this
type of device. Research is ongoing into new actuation techniques to further lower the
cost and increase the surface resolution of this technology.
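The digital side of such a device can be pictured as sampling a CAD height field onto the pin grid and quantizing each pin to its nearest discrete position. The sketch below assumes hypothetical grid size, pitch, travel, and step-count parameters, not the device's actual specifications.

```python
def pin_heights(height_fn, nx, ny, pitch, travel, steps):
    """Quantize a CAD height field onto a pin grid.

    height_fn(x, y) -> surface height in the same units as `travel`.
    Each pin is sampled at its grid position, clamped to the pin's
    travel range, and rounded to one of `steps` discrete positions.
    """
    grid = []
    for j in range(ny):
        row = []
        for i in range(nx):
            h = height_fn(i * pitch, j * pitch)
            h = max(0.0, min(travel, h))          # clamp to pin travel
            q = round(h / travel * (steps - 1))   # nearest discrete step
            row.append(q * travel / (steps - 1))
        grid.append(row)
    return grid
```

With a height field loaded from CAD, the returned grid would drive the clutch array one row at a time as the push plate advances.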


184. FABRICOLOGY: Variable-Property 3D Printing as a Case for Sustainable Fabrication

Neri Oxman
Rapid prototyping technologies speed product design by facilitating visualization and
testing of prototypes. However, such machines are limited to using one material at a
time; even high-end 3D printers, which accommodate the deposition of multiple
materials, must do so discretely and not in mixtures. This project aims to build a
proof-of-concept of a 3D printer able to dynamically mix and vary the ratios of different
materials in order to produce a continuous gradient of material properties with real-time
correspondence to structural and environmental constraints.
Alumni Contributors: Mindy Eng, William J. Mitchell and Rachel Fong
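As an illustration of the control problem a dynamic-mixing print head faces, a desired property gradient must be converted into per-material feed fractions. The linear two-material ramp below is a toy sketch under assumed parameters, not the project's actual control law.

```python
def mix_ratios(z, height, stiff_at_base=0.9):
    """Two-material mix fractions along a vertical property gradient.

    Returns (soft, stiff) feed fractions for a print head at height z,
    linearly interpolating from a stiff base toward a soft top. The
    linear ramp and parameter names are illustrative assumptions.
    """
    t = max(0.0, min(1.0, z / height))   # normalized height, clamped to [0, 1]
    stiff = stiff_at_base * (1.0 - t)    # stiff fraction tapers toward the top
    return (1.0 - stiff, stiff)
```

A real mixing head would additionally account for nozzle transit delay, so the ratio command would be issued slightly ahead of the deposition point.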

185. FitSocket: Measurement for Attaching Objects to People

Arthur Petron, Hugh Herr and Neri Oxman


A better understanding of the biomechanics of human tissue allows for better
attachment of load-bearing objects to people. Think of shoes, ski boots, car seats,
orthotics, and more. We are focusing on prosthetic sockets, the cup-shaped devices
that attach an amputated limb to a lower-limb prosthesis, which currently are made
through unscientific, artisanal methods that do not have repeatable quality and comfort
from one individual to the next. The FitSocket project aims to identify the correlation
between leg tissue properties and the design of a comfortable socket. The FitSocket is
a robotic socket measurement device that directly measures tissue properties. With
these data, we can rapid-prototype test sockets and socket molds in order to make
rigid, spatially variable stiffness, and spatially/temporally variable stiffness sockets.
Alumni Contributor: Elizabeth Tsai

186. Functionally Graded Filament-Wound Carbon-Fiber Prosthetic Sockets

Neri Oxman, Carlos Gonzalez Uribe and Hugh Herr and the Biomechatronics group

Prosthetic sockets belong to a family of orthotic devices designed for amputee
rehabilitation and performance augmentation. Although such products are fabricated
out of lightweight composite materials and designed for optimal shape and size, they
are limited in their capacity to offer local control of material properties for optimizing
load distribution and ergonomic fit over surface and volume areas. Our research offers
a novel workflow to enable the digital design and fabrication of customized prosthetic
sockets with variable impedance informed by MRI data. We implement parametric
environments to enable the controlled distribution of functional gradients of a
filament-wound carbon fiber socket.

187. Gemini

Neri Oxman with Le Laboratoire (David Edwards, Founder), Stratasys, and SITU
Fabrication

Gemini is an acoustical "twin chaise" spanning multiple scales of human existence,
from the womb to the stretches of the Gemini zodiac. We are exploring interactions
between pairs: sonic and solar environments, natural and synthetic materials, hard and
soft sensations, and subtractive and additive fabrication. Made of two material
elements--a solid wood milled shell housing and an intricate cellular skin made of
sound-absorbing material--the chaise forms a semi-enclosed space surrounding the
human with a stimulation-free environment, recapitulating the ultimate quiet of the
womb. It is the first design to implement Stratasys' Connex3 technology using 44
materials with different pre-set mechanical combinations varying in rigidity, opacity, and
color as a function of geometrical, structural, and acoustical constraints. This calming
and still experience of being inside the chaise is an antidote to the stimuli-rich world in
which we live.

188. Glass Printing

Neri Oxman, Markus Kayser, John Klein, Chikara Inamura, Daniel Lizardo, Giorgia
Franchin, Michael Stern, Shreya Dave, Peter Houk, MIT Glass Lab
Digital design and construction technologies for product and building scale are
generally limited in their capacity to deliver multi-functional building skins. Recent
advancements in additive manufacturing and digital fabrication at large are today
enabling the fabrication of multiple materials with combinations of mechanical,
electrical, and optical properties; however, most of these materials are non-structural
and cannot scale to architectural applications. Operating at the intersection of additive
manufacturing, biology, and architectural design, the Glass Printing project is an
enabling technology for optical glass 3D printing at architectural scale designed to
manufacture multi-functional glass structures and facade elements. The platform
deposits molten glass in a layer-by-layer (FDM) fashion, implementing numerical
control of tool paths, and it allows for controlled optical variation across surface and
volume areas.

189. Lichtenberg 3D Printing

Neri Oxman and Steven Keating

Generating 3D Lichtenberg structures in sintered media (i.e., glass) using electricity
offers a new approach to digital fabrication. By robotically controlling the electrodes, a
digital form can be rapidly fabricated with the benefits of a fine fractal structure. There
are numerous applications, ranging from chemical catalysts, to fractal antennas, to
product design.

190. Living Glass

NEW LISTING

Neri Oxman, Chikara Inamura, Daniel Lizardo, Michael Stern, Giorgia Franchin,
Shreya Dave, Pierre-Thomas Brun, Peter Houk, MIT Glass Lab

The Living Glass project has its roots in concepts of metabolist design and fabrication.
Dynamic energy systems within permanent constructions have long been a goal for
architects, and our group brings materials engineering to this pursuit. The project is an
enabling technology, infrastructure, and manifesto for the building block of the future.
Living Glass builds on glass printing by integrating cross-disciplinary research in
materials, mechanical, thermal, structural, and computational engineering to increase
resolution and repeatability on the path to scalability. In conjunction with the
development of the next-generation manufacturing platform, a computational design
workflow will be developed to integrate multi-objective optimization of structural and
environmental performance within the material system. The goal of the project is to
realize this technology through the deployment of a functionally graded, high-fidelity
glass structure with internal vasculature that serves as the infrastructure of functional
fluidics to harness environmental energy.

191. Living Mushtari

NEW LISTING

Mediated Matter group: Neri Oxman (principal investigator), Will Patrick (project
lead), Steven Keating, and Sunanda Sharma; Stratasys; Christoph Bader and
Dominik Kolb; Prof. Pamela Silver and Stephanie Hays (Harvard Medical School);
and Dr. James Weaver
How can we design relationships between the most primitive and sophisticated life
forms? Can we design wearables embedded with synthetic microorganisms that can
enhance and augment biological functionality, and generate consumable energy when
exposed to the sun? We explored these questions through the creation of Mushtari, a
3D-printed wearable with 58 meters of internal fluid channels. Designed to function as a
microbial factory, Mushtari uses synthetic microorganisms to convert sunlight into
useful products for the wearer, engineering a symbiotic relationship between two
bacteria: photosynthetic cyanobacteria and E. coli. The cyanobacteria convert sunlight
to sucrose, and E. coli convert sucrose to useful products such as pigments, drugs,
food, fuel, and scents. This form of symbiosis, known as co-culture, is a phenomenon
commonly found in nature. Mushtari is part of the Wanderers collection, an
astrobiological exploration dedicated to medieval astronomers who explored worlds
beyond by visiting worlds within.

192. Meta-Mesh: Computational Model for Design and Fabrication of Biomimetic
Scaled Body Armors

Neri Oxman, Jorge Duro-Royo, and Laia Mogas-Soldevila

A collaboration between Professor Christine Ortiz (project lead), Professor Mary C.
Boyce, Katia Zolotovsky, and Swati Varshaney (MIT). Operating at the intersection of
biomimetic design and additive manufacturing, this research proposes a computational
approach for designing multifunctional scaled armors that offer structural protection and
flexibility in movement. Inspired by the segmented exoskeleton of Polypterus
senegalus, an ancient fish, we have developed a hierarchical computational model that
emulates structure-function relationships found in the biological exoskeleton. Our
research provides a methodology for the generation of biomimetic protective surfaces
using segmented, articulated components that maintain user mobility alongside
full-body coverage of doubly curved surfaces typical of the human body. The research
is supported by the MIT Institute for Soldier Nanotechnologies, the Institute for
Collaborative Biotechnologies, and the National Security Science and Engineering
Faculty Fellowship Program.

193. Micro-Macro Fluidic Fabrication of a Mid-Sole Running Shoe

Neri Oxman and Carlos Gonzalez Uribe

Micro-Macro Fluidic Fabrication (MMFF) enables the control of mechanical properties
through the design of non-linear lattices embedded within multi-material matrices. At its
core it is a hybrid technique that integrates molding, casting, and macro-fluidics. Its
workflow allows for the fabrication of complex matrices with geometrical channels
injected with polymers of different pre-set mechanical combinations. This novel
fabrication technique is implemented in the design and fabrication of a mid-sole running
shoe. The goal is to passively tune material stiffness across the surface area in order to
absorb the impact force of the user's body weight relative to the ground, and enhance
the direction of the foot-strike impulse force relative to the center of body mass.

194. Mobile Digital Construction Platform (MDCP)

Altec, BASF, Neri Oxman, Steven Keating, John Klein, Julian Leland and Nathan
Spielberg

The MDCP is an in-progress research project consisting of a compound robotic arm
system. The system comprises a 6-axis KUKA robotic arm attached to the endpoint of a
3-axis Altec hydraulic boom arm, which is mounted on a mobile platform. Akin to the
biological model of the human shoulder and hand, this compound system utilizes the
large boom arm for gross positioning and the small robotic arm for fine positioning and
oscillation correction. Potential applications include fabrication of non-standard
architectural forms, integration of real-time on-site sensing data, improvements in
construction efficiency, enhanced resolution, lower error rates, and increased safety.

195. Monocoque

Neri Oxman

French for "single shell," Monocoque stands for a construction technique that supports
structural load using an object's external skin. Contrary to the traditional design of
building skins that distinguish between internal structural frameworks and non-bearing
skin elements, this approach promotes heterogeneity and differentiation of material
properties. The project demonstrates the notion of a structural skin using a Voronoi
pattern, the density of which corresponds to multi-scalar loading conditions. The
distribution of shear-stress lines and surface pressure is embodied in the allocation and
relative thickness of the vein-like elements built into the skin. Its innovative 3D printing
technology provides for the ability to print parts and assemblies made of multiple
materials within a single build, as well as to create composite materials that present
preset combinations of mechanical properties.

196. PCB Origami

Neri Oxman and Yoav Sterman


The PCB Origami project is an innovative concept for printing digital materials and
creating 3D objects with rigid-flex PCBs and pick-and-place machines. These
machines allow printing of digital electronic materials while controlling the location and
properties of each of the components printed. By combining this technology with
rigid-flex PCBs and computational origami, it is possible to create almost any 3D shape
from a single sheet of PCB, already embedded with electronics, to produce a finished
product that is both structural and functional.

197. Printing Living Materials

Neri Oxman, Will Patrick, Sunanda Sharma, Steven Keating, Steph Hays,
Elonore Tham, Professor Pam Silver, and Professor Tim Lu
How can biological organisms be incorporated into product, fashion, and architectural
design to enable the generation of multi-functional, responsive, and highly adaptable
objects? This research pursues the intersection of synthetic biology, digital fabrication,
and design. Our goal is to incorporate engineered biological organisms into inorganic
and organic materials to vary material properties in space and time. We aim to use
synthetic biology to engineer organisms with varied output functionalities and digital
fabrication tools to pattern these organisms and induce their specific capabilities with
spatiotemporal precision.

198. Printing Multi-Material 3D Microfluidics

Neri Oxman, Steven Keating, Will Patrick and David Sun Kong (MIT Lincoln
Laboratory)

Computation and fabrication in biology occur in aqueous environments. Through
on-chip mixing, analysis, and fabrication, microfluidic chips have introduced new
possibilities in biology for over two decades. Existing construction processes for
microfluidics use complex, cumbersome, and expensive lithography methods that
produce single-material, multi-layered 2D chips. Multi-material 3D printing presents a
promising alternative to existing methods that would allow microfluidics to be fabricated
in a single step with functionally graded material properties. We aim to create
multi-material microfluidic devices using additive manufacturing to replicate current
devices, such as valves and ring mixers, and to explore new possibilities enabled by 3D
geometries and functionally graded materials. Applications range from medicine to
genetic engineering to product design.

199. Rapid Craft

Neri Oxman

The values endorsed by vernacular architecture have traditionally promoted designs
constructed and informed by and for the environment, while using local knowledge and
indigenous materials. Under the imperatives and growing recognition of sustainable
design, Rapid Craft seeks integration between local construction techniques and
globally available digital design technologies to preserve, revive, and reshape these
cultural traditions.

200. Raycounting

Neri Oxman
Raycounting is a method for generating customized light-shading constructions by
registering the intensity and orientation of light rays within a given environment. 3D
surfaces of double curvature are the result of assigning light parameters to flat planes.
The algorithm calculates the intensity, position, and direction of one or multiple light
sources placed in a given environment, and assigns local curvature values to each
point in space corresponding to the reference plane and the light dimension. Light
performance analysis tools are reconstructed programmatically to allow for
morphological synthesis based on intensity, frequency, and polarization of light
parameters as defined by the user.
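The core computation the description refers to, evaluating the intensity and direction of several light sources at a point, can be sketched as follows. The inverse-square falloff and the (position, power) source format are assumptions for illustration, not the project's actual model.

```python
import math

def light_at_point(point, sources):
    """Aggregate intensity and weighted incoming direction at a point.

    `sources` is a list of (position, power) pairs in 3D; intensity
    falls off with the inverse square of distance. This is a generic
    illumination sketch; the mapping from these values to local
    curvature is left to the designer's algorithm.
    """
    total, dir_sum = 0.0, [0.0, 0.0, 0.0]
    for pos, power in sources:
        d = [p - q for p, q in zip(pos, point)]   # vector toward the source
        r2 = sum(c * c for c in d)
        if r2 == 0.0:
            continue                              # skip a source at the point itself
        w = power / r2                            # inverse-square falloff
        total += w
        n = math.sqrt(r2)
        for k in range(3):
            dir_sum[k] += w * d[k] / n            # intensity-weighted direction
    return total, dir_sum
```

Evaluating this over a grid of points on a reference plane yields the per-point light parameters from which curvature values could then be assigned.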

201. Silk Pavilion

Neri Oxman, Jorge Duro-Royo, Carlos Gonzalez, Markus Kayser, and Jared
Laucks, with James Weaver (Wyss Institute, Harvard University) and Fiorenzo
Omenetto (Tufts University)
The Silk Pavilion explores the relationship between digital and biological fabrication.
The primary structure was created from 26 polygonal panels made of silk threads laid
down by a CNC (Computer-Numerically Controlled) machine. Inspired by the silkworm's
ability to generate a 3D cocoon out of a single multi-property silk thread, the pavilion's
overall geometry was created using an algorithm that assigns a single continuous
thread across patches, providing various degrees of density. Overall density variation
was informed by deploying the silkworm as a biological "printer" in the creation of a
secondary structure. Positioned at the bottom rim of the scaffold, 6,500 silkworms spun
flat, non-woven silk patches as they locally reinforced the gaps across CNC-deposited
silk fibers. Affected by spatial and environmental conditions (geometrical density,
variation in natural light and heat), the silkworms were found to migrate to darker and
denser areas.
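A toy analogue of the thread-assignment idea, in the spirit of string art rather than the pavilion's actual algorithm, shows how a single continuous thread can vary density with one step parameter:

```python
import math

def continuous_thread(n_pins, step, turns):
    """Single continuous thread across a circular frame, string-art style.

    Pins sit evenly on a unit circle; the thread repeatedly jumps `step`
    pins ahead, so different steps trace chords of different depth and
    the number of passes controls density. All parameters are
    illustrative assumptions.
    """
    path = []
    pin = 0
    for _ in range(turns * n_pins):
        angle = 2 * math.pi * pin / n_pins
        path.append((math.cos(angle), math.sin(angle)))
        pin = (pin + step) % n_pins
    return path
```

Feeding the returned point sequence to a CNC head would lay one unbroken thread whose local coverage depends on the chosen step and number of turns.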

202. SpiderBot

Neri Oxman and Benjamin Peters


The SpiderBot is a suspended robotic gantry system that provides an easily deployable
platform from which to print large structures. The body is composed of a deposition
nozzle, a reservoir of material, and parallel linear actuators. The robot is connected to
stable points high in the environment, such as large trees or buildings. This
arrangement is capable of moving large distances without the need for more
conventional linear guides, much like a spider does. The system is easy to set up for
mobile projects, and will afford sufficient printing resolution and build volume.
Expanding foam can be deposited to create a building-scale printed object rapidly.
Another material type of interest is the extrusion or spinning of tension elements, like
rope or cable. With tension elements, unique structures such as bridges or webs can be
wrapped, woven, or strung around environmental features or previously printed
materials.

203. Water-Based Additive Manufacturing

Neri Oxman, Jorge Duro-Royo, and Laia Mogas-Soldevila, in collaboration with
Dr. Javier G. Fernandez (Wyss Institute, Harvard University)
This research presents water-based robotic fabrication as a design approach and
enabling technology for additive manufacturing (AM) of biodegradable hydrogel
composites. We focus on expanding the dimensions of the fabrication envelope,
developing structural materials for additive deposition, incorporating material-property
gradients, and manufacturing architectural-scale biodegradable systems. The
technology includes a robotically controlled AM system to produce biodegradable
composite objects, combining natural hydrogels with other organic aggregates. It
demonstrates the approach by designing, building, and evaluating the mechanics and
controls of a multi-chamber extrusion system. Finally, it provides evidence of
large-scale composite objects fabricated by our technology that display graded
properties and feature sizes ranging from micro- to macro-scale. Fabricated objects
may be chemically stabilized or dissolved in water and recycled within minutes.
Applications include the fabrication of fully recyclable products or temporary
architectural components, such as tent structures with graded mechanical and optical
properties.

Sputniko!: Design Fiction


Sparking imagination and discussion about the social, cultural, and ethical implications of
new technologies through design and storytelling.

204. (Im)possible Baby

Ai Hasegawa, Sputniko! and Asako Makimura


Delivering a baby from same-sex parents is not a sci-fi dream anymore; recent
developments in genetics and stem cell research have made this dream much closer to
reality. Is creating a baby from same-sex parents the right thing to do? Who has the
right to decide this, and how? This project explores the bioethics of producing babies
for same-sex couples. In the first phase, DNA data will be simulated to visualize the
"potential baby." The project will then explore creating partial organs of the "potential
baby" over the next few years. Not just the authorities and researchers, but everyone
has the right to know, think, and raise their voice about whether this dream becomes a
reality.

205. CremateBot: Transform, Reborn, Free

Sputniko! and Dan Chen


CremateBot is an apparatus that takes in human-body samples such as fingernails,
hair, or dead skin and turns them into ashes through the cremation process. The
process of converting human remains to ashes becomes a critical experience for
observers, causing witnesses to question their sense of existence and physical self
through the conversion process. CremateBot transforms our physical self and
celebrates our rebirth through self-regeneration. The transformation and rebirth open
our imagination to go beyond our physical self and cross the span of time. Similar to
Theseus' paradox, the dead human cells which at one point were considered part of our
physical selves and helped to define our sense of existence are continually replaced
with newly generated cells. With recent advancements in implants, biomechatronics,
and bioengineered organs, how we define ourselves is increasingly blurred.


206. Nostalgic Touch

Sputniko! and Dan Chen


Nostalgic Touch proposes a new ritual for remembering the deceased in the digital and
multicultural age. It is an apparatus that captures hand motions and attempts to
replicate the sensation of intimacy or affection by playing back the comforting gestures.
It stores gesture data of the people you cared about, then plays them back after they
are gone. Similar to rituals in all religions, it gives us a sense of comfort in coping with
death. People in Japan, Singapore, and China live with advanced technology, but
many embrace religious rituals and superstitions as an important part of
their wellbeing and decision-making. Nostalgic Touch explores how emerging
technologies could be used to enrich the experience of these rituals. How could we
augment these rituals to give an even better sense of comfort and intimacy?

207. Open Source Estrogen: Housewives Making Drugs

NEW LISTING

Mary Tsang

Open Source Estrogen: Housewives Making Drugs combines do-it-yourself science,
body and gender politics, and the ethics of hormonal manipulation. The goal of the project
is to create an open-source protocol for estrogen biosynthesis. The kitchen is a
politically charged space prescribed to women as their proper dwelling, making it the
precise context in which to perform an estrogen-synthesis recipe. With recent
developments in the field of synthetic biology, the customized kitchen laboratory may
be a ubiquitous possibility in the near future. Open-access estrogen would allow women
and transgender women to exercise greater control over their bodies by circumventing
governments and institutions. We want to ask: What are the biopolitics governing our
bodies? More importantly, is it ethical to self-administer self-synthesized hormones?

208. Pop Roach

Ai Hasegawa and Sputniko!

Facing a possible food crisis caused by overpopulation, this project explores a future
in which a small community of activists arises to design an edible cockroach that can
survive in harsh environments. These genetically modified roaches are designed to
pass their genes to the next generations; thus the awful black and brown roaches will
be pushed to extinction by the newly designed, cute, colorful, tasty, and highly
nutritious "pop roach." Each color of "pop roach" corresponds to a different flavor,
nutritional profile, and function, while the original roaches remain black or brown and
are not recommended for eating. How will genetic engineering shift our perception of food
and eating habits? Pop Roach explores how we can expand our perception of cuisine
to solve some of the world's most pressing problems.

209. Teshima Bio-Art Pavilion

Sputniko! and Fukutake Foundation

We are currently designing and creating a new Bio-Art Pavilion in Teshima,
an art island that is part of the Benesse Art Site Naoshima in Japan. The pavilion will
include a permanent exhibition space by the Sputniko! / Design Fiction group, and will host
open workshops and lectures in its lab space for young children to experiment,
discuss, and imagine the implications of emerging biotechnologies. The project is
commissioned for the Setouchi Triennale festival in 2016.

210. Tranceflora: Amy's Glowing Silk

Sputniko!

Biotechnology is changing nature, and with it, beauty. How will we, and society as a
whole, respond to these changes? Aphrodite, the ancient Greek goddess of love,
beauty, and procreation, is said to have been born from the ocean wrapped in the
smell of roses, and to have captivated the gods of Olympus with her beauty. Our heroine Amy
turns herself into a modern biotech Aphrodite when she creates an irresistible
rose-scented dress from genetically engineered silk to captivate her secret crush's
heart. Her story is not entirely science fiction: we created Amy's dress using
glowing silk, developed in 2008 by the National Institute of Agrobiological Sciences (NIAS)
in Japan by incorporating jellyfish and coral genes into silkworms. We are also working
with NIAS to develop a rose-scented silk and a "love-inducing" silk containing
oxytocin, both engineered by Amy in the story.


211. Virgin Birth Simulator

Ai Hasegawa and Sputniko!


This project proposes a possible future in which people could choose to create a baby
without a partner, entirely on their own, by exploring the implications of recent research
advances such as induced pluripotent stem (iPS) cells.

Joseph Paradiso: Responsive Environments


Augmenting and mediating human experience, interaction, and perception with sensor
networks.

212. #QL: Hashtag Query Language

Donald Derek H.

#QL is a manifestation of the convergence of social platforms and sensor networks.
Nodes are queried by including special hashtags in a user's status update. In return,
#QL's social bot, which constantly listens for these hashtags, parses the sensor data
from Chain API and replies to the user via the same social medium.

213. Chain API

Joseph A. Paradiso, Gershon Dublon, Brian Mayton and Spencer Russell

RESTful services and the Web provide a framework and structure for content delivery
that is scalable, not only in size but, more importantly, in use cases. As we in
Responsive Environments build systems to collect, process, and deliver sensor data,
this project serves as a research platform that can be shared between a variety of
projects both inside and outside the group. By leveraging hyperlinks between sensor
data, clients can browse, explore, and discover relationships and interactions in ways
that can grow over time.
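The hyperlinked-resource idea at the core of Chain API can be sketched with a toy, in-memory crawler. The resource layout and field names below (`_links`, `metric`, the `/devices` and `/sensors` paths) are illustrative assumptions, not the actual Chain API schema:

```python
# Hypermedia-style crawling in the spirit of Chain API: a client discovers
# sensor resources by following links between JSON documents rather than
# hard-coding endpoints. RESOURCES stands in for a web server.

RESOURCES = {
    "/site": {"name": "tidmarsh",
              "_links": {"devices": ["/devices/1", "/devices/2"]}},
    "/devices/1": {"name": "node-1", "_links": {"sensors": ["/sensors/1a"]}},
    "/devices/2": {"name": "node-2",
                   "_links": {"sensors": ["/sensors/2a", "/sensors/2b"]}},
    "/sensors/1a": {"metric": "temperature", "value": 12.4, "_links": {}},
    "/sensors/2a": {"metric": "humidity", "value": 0.81, "_links": {}},
    "/sensors/2b": {"metric": "temperature", "value": 11.9, "_links": {}},
}

def crawl_sensors(root_href):
    """Breadth-first traversal of linked resources, yielding sensor dicts."""
    queue, seen = [root_href], set()
    while queue:
        href = queue.pop(0)
        if href in seen:
            continue
        seen.add(href)
        resource = RESOURCES[href]  # stands in for an HTTP GET + JSON decode
        if "metric" in resource:
            yield resource
        for linked in resource["_links"].values():
            queue.extend(linked)

temperatures = [s["value"] for s in crawl_sensors("/site")
                if s["metric"] == "temperature"]
```

Because clients only follow links, the server can reorganize or extend the resource graph without breaking them, which is what lets such an API "grow over time."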

214. Circuit Stickers

Joseph A. Paradiso, Jie Qi, Nan-wei Gong and Leah Buechley


Circuit Stickers is a toolkit for crafting electronics using flexible and sticky electronic
pieces. These stickers are created by printing traces on flexible substrates and adding
conductive adhesive. These lightweight, flexible, and sticky circuit boards allow us to
begin sticking interactivity onto new spaces and interfaces such as clothing,
instruments, buildings, and even our bodies.

215. Circuit Stickers Activity Book

Leah Buechley and Jie Qi

The Circuit Sticker Activity Book is a primer for using circuit stickers to create
expressive electronics. Inside are explanations of the stickers, along with circuits and
templates for building functional electronics directly on the pages of the book. The book
covers five topics, from simple LED circuits to crafting switches and sensors. As users
complete the circuits, they are also prompted with craft and drawing activities to ensure
an expressive and artistic approach to learning and building circuits. Once completed,
the book serves as an encyclopedia of techniques to apply to future projects.

216. Circuit Storybook

NEW LISTING

Joseph A. Paradiso, Kevin Slavin, Jie Qi and Sonja de Boer

An interactive picture book that explores storytelling techniques through paper-based
circuitry. Sensors, lights, and microcontrollers embedded in the covers, spine, and
pages of the book add electronic interactivity to the traditional physical picture book,
allowing us to tell new stories in new ways. The current book, "Ellie," tells the
adventures of an LED light named Ellie who dreams of becoming a star, and of her
journey up to the sky.


217. disCERN: Sonification Platform for High-Energy Physics Data

Joseph A. Paradiso and Juliana Cherston

Inspired by previous work in the field of sonification, we are building a data-driven
composition platform that will enable users to map collision-event information from
experiments in high-energy physics to audio properties. In its initial stages, the tool will
be used for outreach, allowing physicists and composers to interact with collision data
through novel interfaces. Our longer-term goal is to develop strategic mappings that
facilitate the auditory perception of hidden regularities in high-dimensional datasets,
so that the platform evolves into a useful analysis tool for physicists as well, possibly
for monitoring slow-control data in experiment control rooms. The project includes a
website with real-time audio streams and basic event data, which is not yet public.

218. DoppelLab: Experiencing Multimodal Sensor Data

Joe Paradiso, Gershon Dublon and Brian Dean Mayton

Homes and offices are being filled with sensor networks to answer specific queries and
solve pre-determined problems, but no comprehensive visualization tools exist for
fusing these disparate data to examine relationships across spaces and sensing
modalities. DoppelLab is a cross-reality virtual environment that represents the
multimodal sensor data produced by a building and its inhabitants. Our system
encompasses a set of tools for parsing, databasing, visualizing, and sonifying these
data; by organizing data by the space from which they originate, DoppelLab provides a
platform to make both broad and specific queries about the activities, systems, and
relationships in a complex, sensor-rich environment.

219. Experiential Lighting: New User Interfaces for Lighting Control

Joseph A. Paradiso, Matthew Aldrich and Nan Zhao

We are evaluating new methods of interacting with and controlling solid-state lighting
based on our findings of how participants experience and perceive architectural lighting
in our new lighting laboratory (E14-548S). This work, aptly named "Experiential Lighting,"
reduces the complexity of modern lighting controls (intensity/color/space) into a simple
mapping, aided by both human input and sensor measurement. We believe our
approach extends beyond general lighting control and is applicable in situations where
human rankings and preferences are critical requirements for control and actuation.
We expect our foundational studies to guide future camera-based systems that will
inevitably incorporate context in their operation (e.g., Google Glass).

220. FingerSynth: Wearable Transducers for Exploring the Environment through Sound

Joseph A. Paradiso and Gershon Dublon

The FingerSynth is a wearable musical instrument, made up of a bracelet and a set of
rings, that enables its players to produce sound by touching nearly any surface in their
environments. Each ring contains a small, independently controlled audio exciter
transducer. The rings sound loudly when they touch a hard object, and are silent
otherwise. When a wearer touches their own (or someone else's) head, the contacted
person hears sound through bone conduction, inaudible to others. A microcontroller
generates a separate audio signal for each ring, and can take user input through an
accelerometer in the form of taps, flicks, and other gestures. The player controls the
envelope and timbre of the sound by varying the physical pressure and the angle of
their finger on the surface, or by touching differently resonant surfaces. The
FingerSynth encourages players to experiment with the materials around them and with
one another.
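The pressure-to-envelope mapping described for the FingerSynth can be sketched in a few lines. The sample rate, the squared-gain curve, and the tone frequency here are assumptions for illustration, not the actual device firmware:

```python
import math

SAMPLE_RATE = 8000  # Hz; an illustrative assumption, not the device's rate

def ring_signal(freq_hz, pressure, n_samples):
    """Generate one ring's audio buffer: a sine tone whose amplitude
    envelope follows finger pressure (0.0 = no contact, 1.0 = full press)."""
    gain = max(0.0, min(1.0, pressure)) ** 2  # soft knee: light touches stay quiet
    return [gain * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

silent = ring_signal(440.0, 0.0, 64)  # no contact -> silence
loud = ring_signal(440.0, 1.0, 64)    # firm press -> full amplitude
```

In the real instrument each ring would run such a generator independently, with pressure estimated from the accelerometer and contact sensing rather than passed in directly.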


221. Hacking the Sketchbook

Joseph A. Paradiso and Jie Qi

In this project we investigate how the process of building a circuit can be made more
organic, like sketching in a sketchbook. We integrate a rechargeable power supply into
the spine of a traditional sketchbook, so that each page of the sketchbook has power
connections. This enables users to begin creating functioning circuits directly on the
pages of the book and to annotate as they would in a regular notebook. The sequential
nature of the sketchbook allows creators to document their process for circuit design.
The book also serves as a single physical archive of various hardware designs. Finally,
the portable and rechargeable nature of the book allows users to take their electronic
prototypes off the lab bench and share their creations with people outside of the lab
environment.

222. Halo: Wearable Lighting

NEW LISTING

Joseph A. Paradiso and Nan Zhao

Imagine a future where lights are not fixed to the ceiling, but follow us wherever we are.
In this colorful world we enjoy lighting that is designed to go along with the moment, the
activity, our feelings, and our outfits. Halo is a wearable lighting device created to
explore this scenario. Unlike architectural lighting, this personal lighting device
aims to illuminate and present its user. Halo changes the wearer's appearance with the
ease of a button click, similar to adding a filter to a photograph. It can also change the
user's view of the world, brightening up a rainy day or coloring a gray landscape. Halo
can react to activities and adapt based on context. It is a responsive window between
the wearer and his or her surroundings.

223. HearThere: Ubiquitous Sonic Overlay

Joseph A. Paradiso, Gershon Dublon and Spencer Russell

With our Ubiquitous Sonic Overlay, we are working to place virtual sounds in the user's
environment, fixing them in space even as the user moves. We are working toward
creating a seamless auditory display, indistinguishable from the user's actual
surroundings. Between bone-conduction headphones, small and cheap orientation
sensors, and ubiquitous GPS, a confluence of fundamental technologies is in place.
However, existing head-tracking systems either limit the motion space to a small area
(e.g., Oculus Rift), or sacrifice precision for scale using technologies like GPS. We are
seeking to bridge that gap to create large outdoor spaces of sonic objects.

224. ListenTree: Audio-Haptic Display in the Natural Environment

V. Michael Bove, Joseph A. Paradiso, Gershon Dublon and Edwina Portocarrero

ListenTree is an audio-haptic display embedded in the natural environment. Visitors to
our installation notice a faint sound emerging from a tree. By resting their heads against
the tree, they are able to hear sound through bone conduction. To create this effect, an
audio exciter transducer is weatherproofed and attached to the tree's roots,
transforming it into a living speaker, channeling audio through its branches, and
providing vibrotactile feedback. In one deployment, we used ListenTree to display live
sound from an outdoor ecological monitoring sensor network, bringing a faraway
wetland into the urban landscape. Our intervention is motivated by a need for forms of
display that fade into the background, inviting attention rather than requiring it. We
consume most digital information through devices that alienate us from our
surroundings; ListenTree points to a future where digital information might become
enmeshed in material.

225. Living Observatory: Sensor Networks for Documenting and Experiencing Ecology

Glorianna Davenport, Joe Paradiso, Gershon Dublon, Pragun Goyal and Brian Dean Mayton

Living Observatory is an initiative for documenting and interpreting ecological change
that will allow people, individually and collectively, to better understand relationships
between ecological processes, human lifestyle choices, and climate-change adaptation.
As part of this initiative, we are developing sensor networks that document ecological
processes and allow people to experience the data at different spatial and temporal
scales. Low-power sensor nodes capture climate and other data at a high
spatiotemporal resolution, while others stream audio. Sensors on trees measure
transpiration and other cycles, while fiber-optic cables in streams capture
high-resolution temperature data. At the same time, we are developing tools that allow
people to explore this data, both remotely and onsite. The remote interface allows for
immersive 3D exploration of the terrain, while visitors to the site will be able to access
data from the network around them directly from wearable devices.

226. Mindful Photons: Context-Aware Lighting

Joseph A. Paradiso, Akane Sano and Nan Zhao

Light enables our visual perception and is the most common medium for displaying
digital information. Light regulates our circadian rhythms, affects productivity and social
interaction, and makes people feel safe. Yet despite the significance of light in
structuring human relationships with their environments on all these levels, we
communicate very little with our artificial lighting systems. Occupancy, ambient
illuminance, intensity, and color preferences are the only input signals currently
provided to these systems. With advanced sensing technology, we can establish better
communication with our devices. This effort is often described as context-awareness.
Context has typically been divided into properties such as location, identity, affective
state, and activity. Using wearable and infrastructure sensors, we are interested in
detecting these properties and using them to control lighting. The Mindful Photons
Project aims to close the loop and allow our light sources to "see" us.

227. MMODM: Massively Multiplayer Online Drum Machine

Joseph A. Paradiso, Tod Machover, Donald Derek H. and Basheer Tome

MMODM is an online drum machine based on the Twitter streaming API, using tweets
from around the world to create and perform musical sequences together in real time.
Users anywhere can express 16-beat note sequences across 26 different instruments,
using plain-text tweets from any device. Meanwhile, users on the site itself can use the
graphical interface to locally DJ the rhythm, filters, and sequence blending. By
harnessing this duo of website and Twitter network, MMODM enables a whole new
scale of synchronous musical collaboration between users locally, remotely, across a
wide variety of computing devices, and across a variety of cultures.

228. Mobile, Wearable Sensor Data Visualization

Joseph A. Paradiso, Gershon Dublon, Brian Mayton, Spencer Russell and Donald Derek H.

As part of the Living Observatory ecological sensing initiative, we've been developing
new approaches to mobile, wearable sensor data visualization. The Tidmarsh app for
Google Glass visualizes real-time sensor network data based on the wearer's location
and gaze. A user can approach a sensor node to see 2D plots of its real-time data
stream, and look across an expanse to see 3D plots encompassing multiple devices.
On the back end, the app showcases our Chain API, crawling linked data resources to
build a dynamic picture of the sensor network. Besides developing new visualizations,
we are building in support for voice queries, and exploring ways to encourage
distributed data collection by users.

229. NailO

Cindy Hsin-Liu Kao, Artem Dementyev, Joe Paradiso, Chris Schmandt

NailO is a nail-mounted gestural input surface inspired by commercial nail stickers.
Using capacitive sensing on printed electrodes, the interface can distinguish on-nail
finger-swipe gestures with high accuracy (>92 percent). NailO works in real time: the
system is miniaturized to fit on the fingernail, while wirelessly transmitting the sensor
data to a mobile phone or PC. NailO allows for one-handed and always-available input,
while being unobtrusive and discreet. The device blends into the user's body, and is
customizable, fashionable, and even removable.
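A toy sketch of how swipe direction might be recovered from a row of capacitive electrodes. The three-electrode layout, the normalized readings, and the threshold are illustrative assumptions, not NailO's actual sensing pipeline:

```python
def swipe_direction(frames, threshold=0.5):
    """Classify a horizontal on-nail swipe from capacitive sensor frames.

    `frames` is a time series; each frame holds one normalized reading per
    electrode, laid out left to right. For each electrode we find the time
    index at which it first exceeds `threshold`, then check whether those
    onset times increase (left-to-right) or decrease (right-to-left).
    """
    n_electrodes = len(frames[0])
    onsets = []
    for e in range(n_electrodes):
        onset = next((t for t, f in enumerate(frames) if f[e] > threshold), None)
        if onset is None:
            return None  # electrode never triggered: not a full swipe
        onsets.append(onset)
    if onsets == sorted(onsets) and onsets[0] < onsets[-1]:
        return "left-to-right"
    if onsets == sorted(onsets, reverse=True) and onsets[0] > onsets[-1]:
        return "right-to-left"
    return None  # ambiguous gesture

# A finger moving left to right triggers electrodes 0, 1, 2 in sequence:
frames = [(0.9, 0.1, 0.1), (0.2, 0.9, 0.1), (0.1, 0.2, 0.9)]
```

A real classifier would also handle taps, diagonal swipes, and noise; onset ordering is just the simplest feature that separates the two horizontal swipes.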

230. Prosthetic Sensor Networks: Factoring Attention, Proprioception, and Sensory Coding

Joseph A. Paradiso and Gershon Dublon

Sensor networks permeate our built and natural environments, but our means of
interfacing to the resultant data streams have not evolved much beyond HCI and
information visualization. Researchers have long experimented with wearable sensors
and actuators on the body as assistive devices. A user's neuroplasticity can, under
certain conditions, transcend sensory substitution to enable perceptual-level cognition
of "extrasensory" stimuli delivered through existing sensory channels. But there
remains a huge gap between data and human sensory experience. We are exploring
the space between sensor networks and human augmentation, in which distributed
sensors become sensory prostheses. By contrast, conventional user interfaces remain
largely unincorporated by the body; our relationship to them is never fully pre-attentive.
Attention and proprioception are key, not only to moderate and direct stimuli, but also
to enable users to move through the world naturally, attending to the sensory
modalities relevant to their specific contexts.

231. SensorChimes: Musical Mapping for Sensor Networks

Joseph A. Paradiso and Evan Lynch


SensorChimes aims to create a new canvas for artists leveraging ubiquitous sensing
and data collection. Real-time data from environmental sensor networks are realized as
musical composition. Physical processes are manifested as musical ideas, with the
dual goal of making meaningful music and rendering an ambient display. The Tidmarsh
Living Observatory initiative, which aims to document the transformation of a reclaimed
cranberry bog, provides an opportunity to explore data-driven musical composition
based on a large-scale environmental sensor network. The data collected from
Tidmarsh are piped into a mapping framework, which a composer configures to
produce music driven by the data.
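The composer-configured mapping from sensor data to musical material can be illustrated with a toy example. The pentatonic scale, the value range, and the MIDI-note output are assumptions for illustration, not the actual Tidmarsh mapping framework:

```python
# A toy data-to-music mapping in the spirit of SensorChimes: a composer
# chooses a scale and a sensor range, and incoming readings are quantized
# onto scale degrees.

PENTATONIC = [60, 62, 64, 67, 69]  # MIDI note numbers: C major pentatonic

def map_to_note(value, lo, hi, scale=PENTATONIC):
    """Linearly map a sensor reading in [lo, hi] onto a note of the scale."""
    clamped = max(lo, min(hi, value))
    index = int((clamped - lo) / (hi - lo) * (len(scale) - 1) + 0.5)
    return scale[index]

# Map a stream of water-temperature readings (degrees C) onto a melody:
readings = [4.0, 6.5, 9.0, 11.5, 14.0]
melody = [map_to_note(r, lo=4.0, hi=14.0) for r in readings]
```

Quantizing to a scale is one common design choice in data sonification: it keeps arbitrary sensor fluctuations inside a consonant pitch set while preserving their ordering.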

Alex 'Sandy' Pentland: Human Dynamics


Exploring how social networks can influence our lives in business, health, governance, and
technology adoption and diffusion.

232. bandicoot: A Python Toolbox for Mobile Phone Metadata

Yves-Alexandre de Montjoye, Luc Rocher, and Alex 'Sandy' Pentland

233. Data-Pop Alliance

Alex 'Sandy' Pentland, Harvard Humanitarian Initiative and Overseas Development Institute

bandicoot provides a complete, easy-to-use environment for researchers using mobile
phone metadata. It allows them to easily load their data, perform analysis, and export
their results with a few lines of code. It computes 100+ standardized metrics in three
categories: individual (number of calls, text response rate), spatial (radius of gyration,
entropy of places), and social network (clustering coefficient, assortativity). The toolbox
is easy to extend and contains extensive documentation with guides and examples.
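One of the spatial metrics named above, radius of gyration, measures how far a user typically ranges from the centroid of their visited locations. A self-contained sketch of the computation, using made-up planar coordinates rather than the antenna positions bandicoot itself works from:

```python
import math

def radius_of_gyration(positions):
    """Root-mean-square distance of visited positions from their centroid.

    Positions are (x, y) pairs on a plane; this is an illustrative
    simplification of the metric bandicoot computes from antenna locations.
    """
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in positions) / n)

home_bound = radius_of_gyration([(0, 0), (0, 1), (1, 0), (1, 1)])  # stays local
commuter = radius_of_gyration([(0, 0), (10, 0), (0, 0), (10, 0)])  # two poles
```

A larger radius of gyration indicates a more dispersed movement pattern, which is why it is a useful single-number summary of mobility.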

Data-Pop Alliance is a joint initiative on big data and development with a goal of helping
to craft and leverage the new ecosystem of big data--new personal data, new tools,
new actors--to improve decisions and empower people in a way that avoids the pitfalls
of a new digital divide, de-humanization, and de-democratization. Data-Pop Alliance
aims to serve as a designer, broker, and implementer of ideas and activities, bringing
together institutions and individuals around common principles and objectives through
collaborative research, training and capacity building, technical assistance, convening,
knowledge curation, and advocacy. Our thematic areas of focus include official
statistics, socio-economic and demographic methods, conflict and crime, climate
change and environment, literacy, and ethics.

234. Enigma
NEW LISTING


Guy Zyskind, Oz Nathan and Alex 'Sandy' Pentland


A peer-to-peer network, enabling different parties to jointly store and run computations
on data while keeping the data completely private. Enigma's computational model is
based on a highly optimized version of secure multi-party computation, guaranteed by a
verifiable secret-sharing scheme. For storage, we use a modified distributed hashtable
for holding secret-shared data. An external blockchain is utilized as the controller of the
network, managing access control and identities, and serving as a tamper-proof log of
events. Security deposits and fees incentivize operation, correctness, and fairness of
the system. Similar to Bitcoin, Enigma removes the need for a trusted third party,
enabling autonomous control of personal data. For the first time, users are able to
share their data with cryptographic guarantees regarding their privacy.
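Enigma's scheme is verifiable and far more elaborate, but the core idea that parties can compute on data they cannot individually read is captured by plain additive secret sharing. A minimal sketch, in which the modulus and the three-party split are illustrative assumptions:

```python
import random

P = 2**61 - 1  # a prime modulus; an illustrative choice, not Enigma's field

def share(secret, n_parties):
    """Split `secret` into n additive shares that sum to it modulo P.
    Any n-1 shares together reveal nothing about the secret."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Homomorphic property: each party adds its shares of two secrets locally,
# and reconstructing the resulting shares yields the sum of the secrets,
# without any party ever seeing either input.
a_shares, b_shares = share(123, 3), share(456, 3)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
```

This additive homomorphism is what lets a network of nodes evaluate functions over private inputs; multiplication requires extra protocol machinery, which is where schemes like Enigma's optimized multi-party computation come in.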


235. Inducing Peer Pressure to Promote Cooperation

Erez Shmueli, Alex 'Sandy' Pentland, Dhaval Adjodah and David Shrier

Cooperation in a large society of self-interested individuals is notoriously difficult to
achieve when the externality of one individual's action is spread thin and wide. This
leads to the "tragedy of the commons," with rational action ultimately making everyone
worse off. Traditional policies to promote cooperation involve Pigouvian taxation or
subsidies that make individuals internalize the externality they incur. We introduce a
new approach to achieving global cooperation by localizing externalities to one's peers
in a social network, thus leveraging the power of peer pressure to regulate behavior.
The mechanism relies on a joint model of externalities and peer pressure. Surprisingly,
this mechanism can require a lower budget to operate than the Pigouvian mechanism,
even when accounting for the social cost of peer pressure. Even when the available
budget is very low, the social mechanisms achieve greater improvement in the
outcome.

236. Leveraging Leadership Expertise More Effectively in Organizations

NEW LISTING

Alex 'Sandy' Pentland, Dhaval Adjodah and Alejandro Noriega Campero

We believe that the narrative of listening only to experts, or of blindly trusting the
wisdom of the crowd, is flawed. Instead, we have developed a system that weights
experts and lay people differently and dynamically, and we show that a good balance
is required. Our methodology leads to a 15 percent improvement in mean performance,
a 15 percent decrease in variance, and an almost 30 percent increase in Sharpe-type
ratio in a real online market.

237. Mobile Territorial Lab

Alex 'Sandy' Pentland, Bruno Lepri and David Shrier

The Mobile Territorial Lab (MTL) aims to create a living laboratory integrated in the
real life of the Trento territory in Italy, open to manifold kinds of experimentation. In
particular, the MTL is focused on exploiting the sensing capabilities of mobile phones to
track and understand human behaviors (e.g., families' spending behaviors, lifestyles,
mood, and stress patterns); on designing and testing social strategies aimed at
empowering individual and collective lifestyles through attitude and behavior change;
and on investigating new paradigms in personal data management and sharing. This
project is a collaboration with Telecom Italia SKIL Lab, Foundation Bruno Kessler, and
Telefonica I+D.

238. On the Reidentifiability of Credit Card Metadata

Yves-Alexandre de Montjoye, Laura Radaelli, Vivek Kumar Singh, Alex 'Sandy' Pentland

Even when real names and other personal information are stripped from metadata
datasets, it is often possible to use just a few pieces of information to identify a specific
person. Here, we study three months of credit card records for 1.1 million people and
show that four spatiotemporal points are enough to uniquely reidentify 90 percent of
individuals. We show that knowing the price of a transaction increases the risk of
reidentification by 22 percent, on average. Finally, we show that even datasets that
provide coarse information on any or all of the dimensions provide little anonymity, and
that women are more reidentifiable than men in credit card metadata.

239. openPDS/SafeAnswers: Protecting the Privacy of Metadata

Alex 'Sandy' Pentland, Brian Sweatt, Erez Shmueli, and Yves-Alexandre de Montjoye

In a world where sensors, data storage, and processing power are too cheap to meter,
how do you ensure that users can realize the full value of their data while protecting
their privacy? openPDS is a field-tested, personal metadata management framework
that allows individuals to collect, store, and give fine-grained access to their metadata
to third parties. SafeAnswers is a new and practical way of protecting the privacy of
metadata at an individual level. SafeAnswers turns a hard anonymization problem into
a more tractable security one. It allows services to ask questions whose answers are
calculated against the metadata, instead of trying to anonymize individuals' metadata.
Together, openPDS and SafeAnswers provide a new way of dynamically protecting
personal metadata.
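The question-answering pattern can be sketched as follows. The class name, the record format, and the example question are illustrative assumptions, not the openPDS API:

```python
# Toy sketch of the SafeAnswers idea: a third party submits a question that
# runs inside the personal data store, and only the (coarse) answer leaves;
# the raw metadata never does.

class PersonalDataStore:
    def __init__(self, records):
        self._records = records  # raw metadata stays inside this object

    def safe_answer(self, question):
        """Run a vetted question against the raw metadata; return only
        the question's answer, never the records themselves."""
        return question(self._records)

calls = [
    {"type": "call", "hour": 9, "duration_s": 120},
    {"type": "call", "hour": 22, "duration_s": 30},
    {"type": "text", "hour": 23},
]
pds = PersonalDataStore(calls)

# "Is this user mostly active at night?" -- a single boolean leaves the
# store; the call log does not.
night_owl = pds.safe_answer(
    lambda recs: sum(r["hour"] >= 21 for r in recs) > len(recs) / 2)
```

The privacy gain comes from the asymmetry: answering "is this user a night owl?" reveals one bit, while shipping the anonymized call log would expose the reidentifiable metadata described in the preceding project.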


240. Prediction Markets: Leveraging Internal Knowledge to Beat Industry Prediction Experts

Alex 'Sandy' Pentland, Dhaval Adjodah and Alejandro Noriega

Markets are notorious for bubbles and bursts. Other research has found that crowds of
lay people can replace even leading experts in predicting everything from product sales
to the next big diplomatic event. In this project, we leverage both threads of research to
see how prediction markets can be used to predict business and technological
innovations, and use them as a model to fix financial bubbles. For example, a prediction
market rolled out inside Intel was very successful, leading to better predictions than the
official Intel forecast 75 percent of the time. Prediction markets have also led to as
much as a 25 percent reduction in mean squared error over the predictions of official
experts at Google, Ford, and Koch Industries.

241. Sensible Organizations

Alex 'Sandy' Pentland, Benjamin Waber and Daniel Olguin Olguin

Data mining of email has provided important insights into how organizations function
and what management practices lead to greater productivity. But important
communications are almost always face-to-face, so we are missing the greater part of
the picture. Today, however, people carry cell phones and wear RFID badges. These
body-worn sensor networks mean that we can potentially know who talks to whom, and
even how they talk to each other. Sensible Organizations investigates how these new
technologies for sensing human interaction can be used to reinvent organizations and
management.

242. The Privacy Bounds of Human Mobility

Cesar A. Hidalgo and Yves-Alexandre de Montjoye

We used 15 months of data from 1.5 million people to show that four
points--approximate places and times--are enough to identify 95 percent of individuals
in a mobility database. Our work shows that human behavior puts fundamental natural
constraints on the privacy of individuals, and these constraints hold even when the
resolution of the dataset is low. These results demonstrate that even coarse datasets
provide little anonymity. We further developed a formula to estimate the uniqueness of
human mobility traces. These findings have important implications for the design of
frameworks and institutions dedicated to protecting the privacy of individuals.
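The uniqueness ("unicity") measurement behind these results can be sketched on synthetic traces. The tiny made-up dataset and the sampling procedure below are illustrative only, not the study's 1.5-million-person data or its estimation formula:

```python
import random

def unicity(traces, k, trials=200, rng=random.Random(0)):
    """Estimate the fraction of users uniquely identified by k random
    spatiotemporal points drawn from their own trace.

    `traces` maps each user to a set of (place, time) tuples. A user is
    "identified" when no other user's trace also contains all k points.
    """
    users = list(traces)
    unique = 0
    for _ in range(trials):
        user = rng.choice(users)
        points = set(rng.sample(sorted(traces[user]), k))
        matches = [u for u in users if points <= traces[u]]
        unique += (matches == [user])
    return unique / trials

# Three synthetic users; places and hours are small integers.
traces = {
    "alice": {(1, 8), (2, 9), (3, 18), (1, 20)},
    "bob":   {(1, 8), (4, 9), (3, 18), (5, 20)},
    "carol": {(2, 8), (2, 9), (6, 18), (1, 20)},
}
```

Even on this toy data, unicity grows quickly with k: a couple of shared commute points are ambiguous, but a handful of points pins each user down, which is the effect the study quantifies at population scale.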

Rosalind W. Picard: Affective Computing


Advancing wellbeing using new ways to communicate, understand, and respond to
emotion.

243. Affective Response to Haptic Signals

NEW LISTING

Grace Leslie, Rosalind Picard, Simon Lui, Suranga Nanayakkara

This study examines humans' affective responses to superimposed sinusoidal signals.
These signals can be perceived either through sound, in the case of electronically
synthesized musical notes, or through vibrotactile stimulation, in the case of vibrations
produced by vibrotactile actuators. The study is concerned with the perception of
superimposed vibrations, whereby two or more sinusoidal signals are perceived
simultaneously, producing a perceptual impression substantially different from that of
each signal alone, owing to interactions between the perceived sinusoidal vibrations
that give rise to a unified percept of a sinusoidal chord. The theory of interval affect was
derived from systematic analyses of Indian, Chinese, Greek, and Arabic music theory
and tradition, and proposes a universal organization of affective responses to intervals
in a multidimensional system. We hypothesize that this interval affect system is
multi-modal and will transfer to the vibrotactile domain.


244. An EEG and Motion-Capture Based Expressive Music Interface for Affective Neurofeedback

Grace Leslie, Rosalind Picard, and Simon Lui

This project examines how the expression granted by new musical interfaces can be
harnessed to create positive changes in health and wellbeing. We are conducting
experiments to measure EEG dynamics and physical movements performed by
participants who are using software designed to invite physical and musical expression
of the basic emotions. The present demonstration of this system incorporates an
expressive gesture sonification system using a Leap Motion device, paired with an
ambient music engine controlled by EEG-based affective indices. Our intention is to
better understand affective engagement by creating both a new musical interface to
invite it and a method to measure and monitor it. We are exploring the use of this
device and protocol in therapeutic settings in which mood recognition and regulation
are a primary goal.

245. Automated Tongue Analysis

NEW LISTING

Javier Hernandez Rivera, Weixuan 'Vincent' Chen, Akane Sano, and Rosalind W. Picard

A common practice in Traditional Chinese Medicine (TCM) is visual examination of the
patient's tongue. This study will examine ways to make this process more objective and
test its efficacy for understanding stress- and health-related changes in people over
time. We start by developing an app that makes it comfortable and easy for people to
collect tongue data in daily life, together with other stress- and health-related
information. We will obtain assessments from expert practitioners of TCM, and also use
state-of-the-art pattern analysis and machine learning to create algorithms that can
help provide better insights for health and the prevention of sickness.

246. Automatic Stress Recognition in Real-Life Settings

Rosalind W. Picard, Robert Randall Morris and Javier Hernandez Rivera

Technologies that automatically recognize stress are extremely important for
preventing chronic psychological stress and the pathophysiological risks associated
with it. The introduction of comfortable and wearable biosensors has created new
opportunities to measure stress in real-life environments, but there is often great
variability in how people experience stress and how they express it physiologically. In
this project, we modify the loss function of Support Vector Machines to encode a
person's tendency to feel more or less stressed, and give more importance to the
training samples of the most similar subjects. These changes are validated in a case
study where skin conductance was monitored in nine call center employees during one
week of their regular work. Employees working in this type of setting usually handle
high volumes of calls every day, and their frequent interactions with angry and
frustrated customers can lead to high stress levels.

247. Autonomic Nervous System Activity in Epilepsy

Rosalind W. Picard and Ming-Zher Poh

We are performing long-term measurements of autonomic nervous system (ANS)
activity on patients with epilepsy. In certain cases, autonomic symptoms are known to
precede seizures. Usually in our data, the autonomic changes start when the seizure
shows in the EEG, and can be measured with a wristband (much easier to wear every
day than an EEG). We found that the larger the signal we measure on the wrist, the
longer the duration of cortical brain-wave suppression following the seizure. The
duration of the latter is a strong candidate for a biomarker for SUDEP (Sudden
Unexpected Death in Epilepsy), and we are working with scientists and doctors to
better understand this. In addition, bilateral changes in ANS activity may provide
valuable information regarding seizure focus localization and semiology.
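The subject-weighting idea described in the stress-recognition project (246) can be sketched with scikit-learn, whose documented `sample_weight` argument to `SVC.fit` stands in here for the modified SVM loss; the features, labels, and similarity scores below are synthetic assumptions, not study data.

```python
# Hypothetical sketch of subject-weighted SVM training: samples from
# subjects similar to the target person get larger weights, so the fit
# favors them. sklearn's `sample_weight` stands in for the custom loss.
from sklearn.svm import SVC

# Synthetic skin-conductance features: [mean level, response rate]
X = [[0.10, 0.20], [0.20, 0.10], [0.90, 0.80],
     [0.80, 0.90], [0.15, 0.25], [0.85, 0.75]]
y = [0, 0, 1, 1, 0, 1]  # 0 = calm, 1 = stressed

# Assumed similarity of each training subject to the target subject
similarity = [1.0, 1.0, 1.0, 1.0, 0.2, 0.2]

clf = SVC(kernel="linear")
clf.fit(X, y, sample_weight=similarity)  # weight similar subjects more

print(clf.predict([[0.12, 0.18]])[0])  # → 0 (calm)
```

In the study the weighting is built into the loss itself; per-sample weights are simply the closest off-the-shelf analogue.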


248. BioGlass: Physiological Parameter Estimation Using a Head-Mounted Wearable Device

Rosalind W. Picard, Javier Hernandez Rivera, James M. Rehg (Georgia Tech) and
Yin Li (Georgia Tech)

What if you could see what calms you down or increases your stress as you go through
your day? What if you could see clearly what is causing these changes for your child or
another loved one? People could become better at accurately interpreting and
communicating their feelings, and better at understanding the needs of those they love.
This work explores the possibility of using sensors embedded in Google Glass, a
head-mounted wearable device, to robustly measure physiological signals of the
wearer.

249. BioInsights: Extracting Personal Data from Wearable Motion Sensors

Rosalind W. Picard, Javier Hernandez Rivera and Daniel McDuff

Wearable devices are increasingly in long-term close contact with the body, giving them
the potential to capture sensitive, unexpected, and surprising personal data. For
instance, we have recently demonstrated that motion sensors embedded in a
head-mounted wearable device like Google Glass can capture the heart rate and
respiration rate from subtle motions of the head. We are examining additional
signatures of information that can be read from motion sensors in wearable devices: for
example, can a person's identity be validated from their subtle physiological motions,
especially those related to their cardiorespiratory activity? How robust are these
motion signatures for identifying a wearer, even across changes in posture, stress, and
activity?

250. BioWatch: Estimation of Heart and Breathing Rates from Wrist Motions

Rosalind W. Picard, Javier Hernandez Rivera and Daniel McDuff

Most wrist-wearable smart watches and fitness bands include motion sensors;
however, their use is limited to estimating physical activities such as tracking the
number of steps when walking or jogging. This project explores how we can process
subtle motion information from the wrist to measure cardiac and respiratory activity. In
particular, we study the following research questions: How can we use the currently
available motion sensors within wrist-worn devices to accurately estimate heart rate
and breathing rate? How do the wrist-worn estimates compare to traditional sensors
and to state-of-the-art wearable physiological sensors? Does combining measurements
from motion and traditional methods improve performance? How well do the proposed
methods perform in daily-life situations to provide unobtrusive physiological
assessments?

251. Building the Just-Right-Challenge in Games and Toys

Rosalind W. Picard and Elliott Hedman

Working with the LEGO Group and Hasbro, we looked at the emotional experience of
playing with games and LEGO bricks. We measured participants' skin conductance as
they learned to play with these new toys. By marking the stressful moments we were
able to see which moments in learning should be redesigned. Our findings suggest that
framing is key: how can we help children recognize their achievements? We also saw
how children are excited to take on new responsibilities but are then quickly
discouraged when they aren't given the resources to succeed. Our hope for this work is
that by using skin conductance sensors, we can help companies better understand the
unique perspective of children and build experiences fit for them.
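One ingredient of the BioWatch idea (project 250) can be illustrated with a toy autocorrelation estimator: periodic cardiorespiratory motion leaves a small oscillation in the wrist accelerometer signal, and its dominant period can be read off the lag of the autocorrelation peak. This is a sketch under assumed parameters, not the published method.

```python
import math

def dominant_rate(samples, fs, lo_hz=0.7, hi_hz=3.0):
    """Estimate the dominant frequency (Hz) within [lo_hz, hi_hz] as the
    lag maximizing the autocorrelation; fs is the sampling rate (Hz)."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]
    best_lag, best_val = None, float("-inf")
    for lag in range(int(fs / hi_hz), int(fs / lo_hz) + 1):
        val = sum(x[i] * x[i + lag] for i in range(n - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return fs / best_lag

# Synthetic wrist signal: a 1.2 Hz "heartbeat" wobble sampled at 50 Hz
fs = 50
signal = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(10 * fs)]
bpm = dominant_rate(signal, fs) * 60  # close to the true 72 beats/min
```

Real wrist data would first need the gravity component and gross movement filtered out; the frequency band is simply bounded to plausible heart rates.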


252. EDA Explorer

Sara Taylor, Natasha Jaques, Victoria Xia, and Rosalind W. Picard


Electrodermal Activity (EDA) is a physiological indicator of stress and strong emotion.
While an increasing number of wearable devices can collect EDA, analyzing the data to
obtain reliable estimates of stress and emotion remains a difficult problem. We have
built a graphical tool that allows anyone to upload their EDA data and analyze it. Using
a highly accurate machine learning algorithm, we can automatically detect noise within
the data. We can also detect skin conductance responses, which are spikes in the
signal indicating a "fight or flight" response. Users can visualize these results and
download files containing features calculated on the data to be used in their own
analysis. Those interested in machine learning can also view and label their data to
train a machine learning classifier. We are currently adding active learning, so the site
can intelligently select the fewest possible samples for the user to label.
Alumni Contributors: Weixuan 'Vincent' Chen, Szymon Fedor and Akane Sano
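As a toy illustration of SCR detection (not the trained classifier EDA Explorer actually uses), a skin conductance response can be flagged as a local maximum preceded by a sufficiently large rise; the threshold below is an assumed value chosen for the synthetic trace.

```python
def detect_scrs(signal, min_rise=0.05):
    """Return indices of local maxima preceded by a rise of at least
    `min_rise` (in microsiemens) over the preceding trough."""
    peaks = []
    trough = signal[0]
    for i in range(1, len(signal) - 1):
        trough = min(trough, signal[i])
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            if signal[i] - trough >= min_rise:
                peaks.append(i)
                trough = signal[i]  # reset after a detected response
    return peaks

# Toy skin-conductance trace with two clear responses
eda = [0.30, 0.30, 0.31, 0.45, 0.38, 0.33, 0.32, 0.50, 0.40, 0.34]
print(detect_scrs(eda))  # → [3, 7]
```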

253. Fathom: Probabilistic Graphical Models to Help Mental Health Counselors

Karthik Dinakar, Jackie Chen, Henry A. Lieberman, and Rosalind W. Picard

We explore advanced machine learning and reflective user interfaces to scale the
national Crisis Text Line. We are using state-of-the-art probabilistic graphical topic
models and visualizations to help a mental health counselor extract patterns of mental
health issues experienced by participants, and to bring large-scale data science to
understanding the distribution of mental health issues in the United States.

254. FEEL: A Cloud System for Frequent Event and Biophysiological Signal Labeling

Yadid Ayzenberg and Rosalind W. Picard

The wide availability of low-cost, wearable, biophysiological sensors enables us to
measure how the environment and our experiences impact our physiology. This creates
a new challenge: in order to interpret the collected longitudinal data, we require the
matching contextual information as well. Collecting weeks, months, and years of
continuous biophysiological data makes it unfeasible to rely solely on our memory for
providing the contextual information, and many view maintaining journals as
burdensome, which may result in low compliance levels and unusable data. We present
an architecture and implementation of a system for the acquisition, processing, and
visualization of biophysiological signals and contextual information.

255. Got Sleep?

Akane Sano, Rosalind W. Picard

Got Sleep? is an Android application that helps people become aware of their
sleep-related behavioral patterns and offers tips on how to change their behaviors to
improve their sleep. The application evaluates people's sleep habits before they start
using the app, tracks day and night behaviors, and provides feedback about what kinds
of behavior changes they should make and whether the improvement is achieved.

256. IDA: Inexpensive Networked Digital Stethoscope

Yadid Ayzenberg

Complex and expensive medical devices are mainly used in medical facilities by health
professionals. IDA is an attempt to disrupt this paradigm and introduce a new type of
device: easy to use, low cost, and open source. It is a digital stethoscope that can be
connected to the Internet for streaming physiological data to remote clinicians.
Designed to be fabricated anywhere in the world with minimal equipment, it can be
operated by individuals without medical training.


257. Large-Scale Pulse Analysis

NEW LISTING

Weixuan 'Vincent' Chen, Javier Hernandez Rivera, Akane Sano and Rosalind W. Picard

This study aims to bring objective measurement to the multiple "pulse" and "pulse-like"
measures made by practitioners of Traditional Chinese Medicine (TCM). The measures
are traditionally made by manually palpating the patient's inner wrist in multiple places,
and relating the sensed responses to various medical conditions. Our project brings
several new kinds of objective measurement to this practice, compares their efficacy,
and examines the connection of the measured data to various other measures of health
and stress. Our approach includes the possibility of building a smartwatch application
that can analyze stress and health information from the point of view of TCM.

258. Lensing: Cardiolinguistics for Atypical Angina

Catherine Kreatsoulas (Harvard), Rosalind W. Picard, Karthik Dinakar, David Blei
(Columbia) and Matthew Nock (Harvard)

Conversations between two individuals--whether between doctor and patient, mental
health therapist and client, or between two people romantically involved with each
other--are complex. Each participant contributes to the conversation using her or his
own "lens." This project uses advanced probabilistic graphical models to statistically
extract and model these dual lenses across large datasets of real-world conversations,
with applications that can improve crisis and psychotherapy counseling and
patient-cardiologist consultations. We're working with top psychologists, cardiologists,
and crisis counseling centers in the United States.

259. Mapping the Stress of Medical Visits

Rosalind W. Picard and Elliott Hedman

Receiving a shot or discussing health problems can be stressful, but it does not always
have to be. We measure participants' skin conductance as they use medical devices or
visit hospitals, and note the times when stress occurs. We then prototype possible
solutions and record how the emotional experience changes. We hope work like this will
help bring the medical community closer to their customers.

260. Measuring Arousal During Therapy for Children with Autism and ADHD

Rosalind W. Picard and Elliott Hedman

Physiological arousal is an important part of occupational therapy for children with
autism and ADHD, but therapists do not have a way to objectively measure how
therapy affects arousal. We hypothesize that when children participate in guided
activities within an occupational therapy setting, informative changes in electrodermal
activity (EDA) can be detected using iCalm. iCalm is a small, wireless sensor that
measures EDA and motion, worn on the wrist or above the ankle. Statistical analysis
describing how equipment affects EDA was inconclusive, suggesting that many factors
play a role in how a child's EDA changes. Case studies provided examples of how
occupational therapy affected children's EDA. This is the first study of the effects of
occupational therapy activities in situ using continuous physiological measures. The
results suggest that careful case study analyses of the relation between therapeutic
activities and physiological arousal may inform clinical practice.

261. Mobile Health Interventions for Drug Addiction and PTSD

Rich Fletcher and Rosalind W. Picard

We are developing a mobile phone-based platform to assist people with chronic
diseases, panic-anxiety disorders, or addictions. Making use of wearable, wireless
biosensors, the mobile phone uses pattern analysis and machine learning algorithms to
detect specific physiological states and perform automatic interventions in the form of
text/images plus sound files and social networking elements. We are currently working
with the Veterans Administration drug rehabilitation program involving veterans with
PTSD.


262. Mobisensus: Predicting Your Stress/Mood from Mobile Sensor Data

Akane Sano and Rosalind Picard

Can we recognize stress, mood, and health conditions from wearable sensors and
mobile-phone usage data? We analyze long-term, multi-modal physiological,
behavioral, and social data (electrodermal activity, skin temperature, accelerometer,
phone usage, social network patterns) collected in daily life with wearable sensors and
mobile phones to extract bio-markers related to health conditions, interpret
inter-individual differences, and develop systems to keep people healthy.

263. Modulating Peripheral and Cortical Arousal Using a Musical Motor Response Task

NEW LISTING

Grace Leslie, Rosalind Picard, Simon Lui, Annabel Chen

We are conducting EEG studies to identify the musical features and musical interaction
patterns that universally impact measures of arousal. We hypothesize that we can
induce states of high and low arousal using electrodermal activity (EDA) biofeedback,
and that these states will produce correlated differences in concurrently recorded skin
conductance and EEG data, establishing a connection between peripherally recorded
physiological arousal and cortical arousal as revealed in EEG. We also hypothesize
that manipulation of musical features of a computer-generated musical stimulus track
will produce changes in peripheral and cortical arousal. These musical stimuli and
programmed interactions may be incorporated into music technology therapy designed
to reduce arousal or increase learning capability by increasing attention. We aim to
provide a framework for the neural basis of emotion-cognition integration in learning
that may shed light on education and possible applications to improve learning through
emotion regulation.

264. Objective Assessment of Depression and Its Improvement

Rosalind W. Picard, Szymon Fedor, Brigham and Women's Hospital and
Massachusetts General Hospital

Current methods to assess depression, and ultimately to select an appropriate
treatment, have many limitations. They are usually based on clinician-rated scales
developed in the 1960s, whose main drawbacks are a lack of objectivity, being
symptom-based rather than preventative, and requiring accurate communication. This
work explores new technology to assess depression, including its increase or decrease,
in an automatic, more objective, pre-symptomatic, and cost-effective way, using
wearable sensors and smart phones for 24/7 monitoring of different personal
parameters such as physiological data, voice characteristics, sleep, and social
interaction. We aim to enable early diagnosis and prevention of depression,
assessment of depression for people who cannot communicate, better assignment of
treatment, early detection of treatment remission and response, and anticipation of
post-treatment relapse or recovery.

265. Panoply

Rosalind W. Picard and Robert Morris

Panoply is a crowdsourcing application for mental health and emotional wellbeing. The
platform offers a novel approach to computer-based psychotherapy, one that is
optimized for accessibility, engagement, and therapeutic efficacy. A three-week
randomized controlled trial with 166 participants compared Panoply to an active control
task (online expressive writing). Panoply conferred greater or equal benefits for nearly
every therapeutic outcome measure, and significantly outperformed the control task on
all measures of engagement.

266. PongCam

Rosalind Picard, Juliana Cherston, and Natasha Jaques


PongCam is a wellbeing project that enables Media Lab ping pong players to save
videos of their best ping pong shots on YouTube. The device is constantly capturing
footage of the ping pong table, storing the most recent footage in a buffer. After a good
shot, a player can hit a big red button and the last 30 seconds of footage will be
uploaded to the PongCam highlights reel on YouTube. We observe how devices of this
sort promote mental and physical wellbeing in the Lab.


267. Predicting Students' Wellbeing from Physiology, Phone, Mobility, and Behavioral Data

Natasha Jaques, Sara Taylor, Akane Sano, and Rosalind Picard


The goal of this project is to apply machine learning methods to model the wellbeing of
MIT undergraduate students. Extensive data is obtained from the SNAPSHOT study,
which monitors students on a 24/7 basis, collecting their location, smartphone logs,
sleep schedule, phone and SMS communications, academics, social networks, and
even physiological markers like skin conductance, skin temperature, and acceleration.
We extract features from this data and apply a variety of machine learning algorithms
including Multiple Kernel Learning, Gaussian Mixture Models, and Transfer Learning,
among others. Interesting findings include: when participants visit novel locations they
tend to be happier; when they use their phones or stay indoors for long periods they
tend to be unhappy; and when several dimensions of wellbeing (including stress,
happiness, health, and energy) are learned together, classification accuracy improves.
Alumni Contributors: Asaph Azaria and Asma Ghandeharioun

268. Real-Time Assessment of Suicidal Thoughts and Behaviors

Rosalind W. Picard, Szymon Fedor, Harvard and Massachusetts General Hospital

Depression correlated with anxiety is one of the key factors leading to suicidal
behavior, and is among the leading causes of death worldwide. Despite the scope and
seriousness of suicidal thoughts and behaviors, we know surprisingly little about what
suicidal thoughts look like in nature (e.g., how frequent, intense, and persistent are
they among those who have them? What cognitive, affective/physiological, behavioral,
and social factors trigger their occurrence?). The reason for this lack of information is
that historically researchers have used retrospective self-report to measure suicidal
thoughts, and have lacked the tools to measure them as they naturally occur. In this
work we explore the use of wearable devices and smartphones to identify behavioral,
affective, and physiological predictors of suicidal thoughts and behaviors.

269. SmileTracker

Natasha Jaques, Weixuan 'Vincent' Chen and Rosalind Picard

SmileTracker is a system designed to capture naturally occurring instances of positive
emotion during the course of normal interaction with a computer. A facial expression
recognition algorithm is applied to images captured with the user's webcam. When the
user smiles, both a photo and a screenshot are recorded and saved to the user's profile
for later review. Based on positive psychology research, we hypothesize that the act of
reviewing content that led to smiles will improve positive affect and, consequently,
overall wellbeing.

270. SNAPSHOT Expose

NEW LISTING

Miriam Zisook, Sara Taylor, Akane Sano and Rosalind Picard


In this project, we apply what we have learned from the SNAPSHOT study to the
problem of changing behavior. We explore the design of user-centered tools that can
harness the experience of collecting and reflecting on personal data to promote healthy
behaviors, including stress management and sleep regularity. We will draw on
commonly used theories of behavior change as the inspiration for distinct conceptual
designs for a behavior change application based on the SNAPSHOT study. This
approach will enable us to compare the types of visualization strategies that are most
meaningful and useful for acting on each theory.


271. SNAPSHOT Study

Akane Sano, Amy Yu, Sara Taylor, Cesar Hidalgo and Rosalind Picard

The SNAPSHOT study seeks to measure Sleep, Networks, Affect, Performance,
Stress, and Health using Objective Techniques. It is an NIH-funded collaborative
research project between the Affective Computing group, the Macro Connections
group, and Harvard Medical School's Brigham & Women's Hospital. We have been running this
study since fall 2013 to collect one month of data from 50 MIT undergraduate students
who are socially connected every semester. We have collected data from about 170
participants, totaling over 5,000 days of data. We measure physiological, behavioral,
environmental, and social data using mobile phones, wearable sensors, surveys, and
lab studies. We investigate how daily behaviors and social connectivity influence sleep
behaviors, health, and outcomes such as mood, stress, and academic performance.
Using this multimodal data, we are developing models to predict onsets of sadness and
stress. This study will provide insights into behavioral choices for wellbeing and
performance.

272. StoryScape

Rosalind W. Picard and Micah Eckhardt


Stories, language, and art are at the heart of StoryScape. While StoryScape began as a
tool to meet the challenging language learning needs of children diagnosed with autism,
it has become much more. StoryScape was created to be the first truly open and
customizable platform for creating animated, interactive storybooks that can interact
with the physical world. Download the android app:
https://play.google.com/store/apps/details?id=edu.mit.media.storyscape and make your
own amazing stories at https://storyscape.io/.

273. The Challenge

Natasha Jaques, Niaja Farve, Pattie Maes and Rosalind W. Picard


Individuals who work in sedentary occupations are at increased risk of a number of
serious health consequences. This project involves both a tool and an experiment
aimed at decreasing sedentary activity and promoting social connections among
members of the MIT Media Lab. Our system will ask participants to sign up for short
physical challenges (ping pong, foosball, walking) and pair them with a partner to
perform the activity. Participants' overall activity levels will be monitored with an activity
tracker during the course of the study to assess the effectiveness of the system.

274. Tributary

Yadid Ayzenberg, Rosalind Picard


The proliferation of smartphones and wearable sensors is creating very large data sets
that may contain useful information. However, the magnitude of generated data creates
new challenges as well. Processing and analyzing these large data sets in an efficient
manner requires computational tools. Many of the traditional analytics tools are not
optimized for dealing with large datasets. Tributary is a parallel engine for searching
and analyzing sensor data. The system utilizes large clusters of commodity machines
to enable in-memory processing of sensor time-series signals, making it possible to
search through billions of samples in seconds. Users can access a rich library of
statistics and digital signal processing functions or write their own in a variety of
languages.
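A miniature of Tributary's parallel-search idea can be sketched as follows, with threads standing in for the cluster of machines the real system uses; the function names and the query (find chunks whose mean exceeds a bound) are invented for illustration.

```python
# Toy parallel scan of a sensor time series: split it into chunks and
# evaluate a statistic on each chunk concurrently.
from concurrent.futures import ThreadPoolExecutor

def chunk_mean(chunk):
    return sum(chunk) / len(chunk)

def parallel_search(series, chunk_size, bound):
    """Return indices of chunks whose mean exceeds `bound`."""
    chunks = [series[i:i + chunk_size]
              for i in range(0, len(series), chunk_size)]
    with ThreadPoolExecutor() as pool:
        means = list(pool.map(chunk_mean, chunks))
    return [i for i, m in enumerate(means) if m > bound]

# 20 samples of a toy signal: quiet, then a burst of activity
series = [0.1] * 10 + [5.0] * 10
print(parallel_search(series, chunk_size=10, bound=1.0))  # → [1]
```

The real engine distributes chunks across machines and keeps them in memory; the map-over-chunks structure is the part this sketch preserves.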

275. Unlocking Sleep

Rosalind W. Picard, Thariq Shihipar and Sara Taylor


Despite a vast body of knowledge about the importance of sleep, our daily schedules
are often planned around work and social events, not healthy sleep. While we're
prompted throughout the day by devices and people to plan and think about our
schedules in terms of things to do, sleep is rarely considered until we're tired and it's
late. This project proposes a way that our everyday use of technology can help improve
sleep habits. Smartphone unlock screens are an unobtrusive way of prompting user
reflection throughout the day by posing "microquestions" as users unlock their phone.
The questions are easily answered with a single swipe. Since we unlock our phones 50
to 200 times per day, microquestions can collect information with minimal intrusion into
the user's daily life. Can these swipe-questions help users mentally plan their day
around sleep, and trigger healthier sleep behaviors?


276. Valinor: Mathematical Models to Understand and Predict Self-Harm

Rosalind W. Picard, Karthik Dinakar, Eric Horvitz (Microsoft Research) and
Matthew Nock (Harvard)

We are developing statistical tools for understanding, modeling, and predicting
self-harm by using advanced probabilistic graphical models and fail-soft machine
learning, in collaboration with Harvard University and Microsoft Research.

277. Wavelet-Based Motion Artifact Removal for Electrodermal Activity

NEW LISTING

Weixuan 'Vincent' Chen, Natasha Jaques, Sara Taylor, Akane Sano, Szymon
Fedor and Rosalind W. Picard

Electrodermal activity (EDA) recording is a powerful, widely used tool for monitoring
psychological or physiological arousal. However, analysis of EDA is hampered by its
sensitivity to motion artifacts. We propose a method for removing motion artifacts from
EDA, measured as skin conductance (SC), using a stationary wavelet transform (SWT).
We modeled the wavelet coefficients as a Gaussian mixture distribution corresponding
to the underlying skin conductance level (SCL) and skin conductance responses
(SCRs). The goodness-of-fit of the model was validated on ambulatory SC data. We
evaluated the proposed method in comparison with three previous approaches. Our
method achieved a greater reduction of artifacts while retaining motion-artifact-free
data.
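The wavelet idea behind project 277 can be caricatured in a few lines: a one-level stationary Haar transform whose large detail coefficients (sharp motion spikes) are suppressed before inversion. A hard threshold with an assumed value stands in for the paper's Gaussian-mixture model, and in this toy setting the spike is only attenuated, not fully removed.

```python
# Drastically simplified sketch of SWT-based artifact suppression.
def haar_swt_denoise(signal, threshold=0.5):
    """One-level stationary Haar transform: zero out large detail
    coefficients (sharp motion spikes), then invert the transform."""
    n = len(signal)
    approx = [(signal[i] + signal[(i + 1) % n]) / 2 for i in range(n)]
    detail = [(signal[i] - signal[(i + 1) % n]) / 2 for i in range(n)]
    # Hard threshold: motion artifacts show up as outsized details
    detail = [0.0 if abs(d) > threshold else d for d in detail]
    # Inverse: average the two redundant reconstructions
    rec = [0.0] * n
    for i in range(n):
        rec[i] += (approx[i] + detail[i]) / 2
        rec[(i + 1) % n] += (approx[i] - detail[i]) / 2
    return rec

# Slowly varying skin conductance with one sharp motion spike
sc = [1.0, 1.0, 1.1, 1.1, 5.0, 1.2, 1.2, 1.3]
clean = haar_swt_denoise(sc)  # spike at index 4 is attenuated
```

Without thresholding, this redundant transform reconstructs the input exactly; the published method instead decides which coefficients to keep by modeling them as a mixture of SCL/SCR and artifact components.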

Iyad Rahwan: Scalable Cooperation


Reimagining the way society organizes, cooperates, and governs itself.

278. Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?

NEW LISTING

Iyad Rahwan, Azim Shariff and Jean-François Bonnefon

The wide adoption of self-driving, Autonomous Vehicles (AVs) promises to dramatically
reduce the number of traffic accidents. Some accidents, though, will be inevitable,
because some situations will require AVs to choose the lesser of two evils; for example,
running over a pedestrian on the road or on the sidewalk. It is a formidable challenge to
define the algorithms that will guide AVs confronted with such moral dilemmas. In
particular, these moral algorithms will need to accomplish three potentially incompatible
objectives: being consistent, not causing public outrage, and not discouraging buyers.
We argue that to achieve these objectives, manufacturers and regulators will need
psychologists to apply the methods of experimental ethics to situations involving AVs
and unavoidable harm. To illustrate our claim, we report three surveys showing that
laypersons are relatively comfortable with utilitarian AVs, programmed to minimize the
death toll in case of unavoidable harm.

279. Crowdsourcing a Manhunt

NEW LISTING

Iyad Rahwan and Sohan Dsouza

People often say that we live in a small world. In a brilliant experiment, legendary social
psychologist Stanley Milgram proved the six degrees of separation hypothesis: that
everyone is six or fewer steps away, by way of introduction, from any other person in
the world. But how far are we, in terms of time, from anyone on earth? Our team won
the Tag Challenge, showing it is possible to find a person, using only his or her mug
shot, within 12 hours.

280. Crowdsourcing Under Attack

NEW LISTING

Iyad Rahwan and Manuel Cebrian

The Internet has unleashed the capacity for planetary-scale collective problem solving
(also known as crowdsourcing). However, the very openness of crowdsourcing makes it
vulnerable to sabotage by rogue or competitive actors. To explore the effect of errors
and sabotage on the performance of crowdsourcing, we analyze data from the DARPA
Shredder Challenge, a prize competition for exploring methods to reconstruct
documents shredded by a variety of paper shredding techniques.


281. Honest Crowds


NEW LISTING

Iyad Rahwan, Lorenzo Coviello, Morgan Frank, Lijun Sun, Manuel Cebrian and NICTA

The Honest Crowds project addresses shortcomings of traditional survey techniques in
the modern information and big data age. Web survey platforms, such as Amazon's
Mechanical Turk and CrowdFlower, bring together millions of surveys and millions of
survey participants, which means paying a flat rate for each completed survey may lead
to survey responses that lack desirable care and forethought. Rather than allowing
survey takers to maximize their reward by completing as many surveys as possible, we
demonstrate how strategic incentives can be used to actually reward information and
honesty rather than just participation. The incentive structures that we propose provide
scalable solutions for the new paradigm of survey and active data collection.

Ramesh Raskar: Camera Culture


Making the invisible visible inside our bodies, around us, and beyond for health, work, and
connection.

282. 6D Display

Ramesh Raskar, Martin Fuchs, Hans-Peter Seidel, and Hendrik P. A. Lensch


Is it possible to create passive displays that respond to changes in viewpoint and
incident light conditions? Holograms and 4D displays respond to changes in viewpoint.
6D displays respond to changes in viewpoint as well as surrounding light. We encode
the 6D reflectance field into an ordinary 2D film. These displays are completely passive
and do not require any power. Applications include novel instruction manuals and mood
lights.

283. A Switchable Light-Field Camera

Matthew Hirsch, Sriram Sivaramakrishnan, Suren Jayasuriya, Albert Wang,
Alyosha Molnar, Ramesh Raskar, and Gordon Wetzstein

We propose a flexible light-field camera architecture that represents a convergence of
optics, sensor electronics, and applied mathematics. Through the co-design of a sensor
that comprises tailored, angle-sensitive pixels and advanced reconstruction algorithms,
we show that, contrary to light-field cameras today, our system can use the same
measurements captured in a single sensor image to recover either a high-resolution 2D
image, a low-resolution 4D light field using fast, linear processing, or a high-resolution
light field using sparsity-constrained optimization.

284. Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance

Ramesh Raskar, Ankit Mohan, Grace Woo, Shinsaku Hiura and Quinn Smithwick

With over a billion people carrying camera-phones worldwide, we have a new
opportunity to upgrade the classic bar code to encourage a flexible interface between
the machine world and the human world. Current bar codes must be read within a short
range and the codes occupy valuable space on products. We present a new, low-cost,
passive optical design so that bar codes can be shrunk to smaller than 3 mm and can be
read by unmodified ordinary cameras several meters away.

October 2015

Page 59

285. CATRA: Mapping of Cataract Opacities Through an Interactive Approach

Ramesh Raskar, Vitor Pamplona, Erick Passos, Jan Zizka, Jason Boggess, David Schafran, Manuel M. Oliveira, Everett Lawson, and Esteban Clua

286. Coded Computational Photography

Jaewon Kim, Ahmed Kirmani, Ankit Mohan and Ramesh Raskar

287. Coded Focal Stack Photography

Ramesh Raskar, Gordon Wetzstein, and Xing Lin (Tsinghua University)

288. Compressive Light-Field Camera: Next Generation in 3D Photography

Kshitij Marwah, Gordon Wetzstein, Yosuke Bando and Ramesh Raskar

289. Eyeglasses-Free Displays

Ramesh Raskar and Gordon Wetzstein


We introduce a novel interactive method to assess cataracts in the human eye by
crafting an optical solution that measures the perceptual impact of forward scattering on
the foveal region. Current solutions rely on highly trained clinicians to check the back
scattering in the crystalline lens and test their predictions on visual acuity tests.
Close-range parallax barriers create collimated beams of light to scan through
sub-apertures, scattering light as it strikes a cataract. User feedback generates maps
for opacity, attenuation, contrast, and local point-spread functions. The goal is to allow
a general audience to operate a portable, high-contrast, light-field display to gain a
meaningful understanding of their own visual conditions. The compiled maps are used
to reconstruct the cataract-affected view of an individual, offering a unique approach for
capturing information for screening, diagnostic, and clinical analysis.

Computational photography is an emerging multi-disciplinary field at the intersection of
optics, signal processing, computer graphics and vision, electronics, art, and online
sharing in social networks. The first phase of computational photography was about
building a super-camera that has enhanced performance in terms of the traditional
parameters, such as dynamic range, field of view, or depth of field. We call this Epsilon
Photography. The next phase of computational photography is building tools that go
beyond the capabilities of this super-camera. We call this Coded Photography. We can
code exposure, aperture, motion, wavelength, and illumination. By blocking light over
time or space, we can preserve more details about the scene in the recorded single
photograph.

We present coded focal stack photography as a computational photography paradigm
that combines a focal sweep and a coded sensor readout with novel computational
algorithms. We demonstrate various applications of coded focal stacks, including
photography with programmable non-planar focal surfaces and multiplexed focal stack
acquisition. By leveraging sparse coding techniques, coded focal stacks can also be
used to recover a full-resolution depth and all-in-focus (AIF) image from a single
photograph. Coded focal stack photography is a significant step towards a
computational camera architecture that facilitates high-resolution post-capture
refocusing, flexible depth of field, and 3D imaging.

Consumer photography is undergoing a paradigm shift with the development of light
field cameras. Commercial products such as those by Lytro and Raytrix have begun to
appear in the marketplace with features such as post-capture refocus, 3D capture, and
viewpoint changes. These cameras suffer from two major drawbacks: major drop in
resolution (converting a 20 MP sensor to a 1 MP image) and large form factor. We have
developed a new light-field camera that circumvents traditional resolution losses (a 20
MP sensor turns into a full-sensor resolution refocused image) in a thin form factor that
can fit into traditional DSLRs and mobile phones.

Millions of people worldwide need glasses or contact lenses to see or read properly.
We introduce a computational display technology that predistorts the presented content
for an observer, so that the target image is perceived without the need for eyewear. We
demonstrate a low-cost prototype that can correct myopia, hyperopia, astigmatism, and
even higher-order aberrations that are difficult to correct with glasses.


290. Imaging Behind Diffusive Layers

Ramesh Raskar, Barmak Heshmat Dehkordi and Dan Raviv

291. Imaging through Scattering Media Using Femtophotography

Ramesh Raskar, Christopher Barsi and Nikhil Naik

292. Inverse Problems in Time-of-Flight Imaging

Ayush Bhandari and Ramesh Raskar

293. Layered 3D: Glasses-Free 3D Printing

Gordon Wetzstein, Douglas Lanman, Matthew Hirsch, Wolfgang Heidrich, and Ramesh Raskar

294. LensChat: Sharing Photos with Strangers

Ramesh Raskar, Rob Gens and Wei-Chao Chen

295. Looking Around Corners

Andreas Velten, Di Wu, Christopher Barsi, Ayush Bhandari, Achuta Kadambi, Nikhil Naik, Micha Feigin, Daniel Raviv, Thomas Willwacher, Otkrist Gupta, Ashok Veeraraghavan, Moungi G. Bawendi, and Ramesh Raskar

The use of fluorescent probes and the recovery of their lifetimes allow for significant
advances in many imaging systems, in particular medical imaging systems. Here, we
propose and experimentally demonstrate reconstructing the locations and lifetimes of
fluorescent markers hidden behind a turbid layer. This opens the door to various
applications for non-invasive diagnosis, analysis, flowmetry, and inspection. The
method is based on a time-resolved measurement which captures information about
both fluorescence lifetime and spatial position of the probes. To reconstruct the scene,
the method relies on a sparse optimization framework to invert time-resolved
measurements. This wide-angle technique does not rely on coherence, and does not
require the probes to be directly in line of sight of the camera, making it potentially
suitable for long-range imaging.

We use time-resolved information in an iterative optimization algorithm to recover
reflectance of a three-dimensional scene hidden behind a diffuser. We demonstrate
reconstruction of large images without relying on knowledge of diffuser properties.

We are exploring mathematical modeling of time-of-flight imaging problems and solutions.

We are developing tomographic techniques for image synthesis on displays composed
of compact volumes of light-attenuating material. Such volumetric attenuators recreate
a 4D light field or high-contrast 2D image when illuminated by a uniform backlight.
Since arbitrary views may be inconsistent with any single attenuator, iterative
tomographic reconstruction minimizes the difference between the emitted and target
light fields, subject to physical constraints on attenuation. For 3D displays, spatial
resolution, depth of field, and brightness are increased, compared to parallax barriers.
We conclude by demonstrating the benefits and limitations of attenuation-based light
field displays using an inexpensive fabrication method: separating multiple printed
transparencies with acrylic sheets.
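The iterative tomographic step described above can be illustrated with a SART-style update. This is only a toy, dense sketch under assumed relaxation and non-negativity clamping, not the project's actual solver, which operates on large sparse light-field projection matrices:

```python
import numpy as np

def sart(A, b, iters=200, lam=0.5):
    """SART-style iteration for A x ~= b, clamping x to be non-negative
    to model physical attenuation constraints (toy dense version)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    row_sums[row_sums == 0] = 1.0
    col_sums[col_sums == 0] = 1.0
    for _ in range(iters):
        resid = (b - A @ x) / row_sums       # row-normalized residual
        x = x + lam * (A.T @ resid) / col_sums
        x = np.clip(x, 0.0, None)            # attenuation cannot be negative
    return x
```

For a small consistent system such as A = [[1,0],[0,1],[1,1]] and b = [1,2,3], the iteration converges to x close to [1, 2].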

With networked cameras in everyone's pockets, we are exploring the practical and
creative possibilities of public imaging. LensChat allows cameras to communicate with
each other using trusted optical communications, allowing users to share photos with a
friend by taking pictures of each other, or borrow the perspective and abilities of many
cameras.

Using a femtosecond laser and a camera with a time resolution of about one trillion
frames per second, we recover objects hidden out of sight. We measure speed-of-light
timing information of light scattered by the hidden objects via diffuse surfaces in the
scene. The object data are mixed up and are difficult to decode using traditional
cameras. We combine this "time-resolved" information with novel reconstruction
algorithms to untangle image information and demonstrate the ability to look around
corners.
Alumni Contributors: Andreas Velten, Otkrist Gupta and Di Wu


296. NETRA: Smartphone Add-On for Eye Tests

Vitor Pamplona, Manuel Oliveira, Erick Passos, Ankit Mohan, David Schafran, Jason Boggess and Ramesh Raskar
Can a person look at a portable display, click on a few buttons, and recover his
refractive condition? Our optometry solution combines inexpensive optical elements
and interactive software components to create a new optometry device suitable for
developing countries. The technology allows for early, extremely low-cost, mobile, fast,
and automated diagnosis of the most common refractive eye disorders: myopia
(nearsightedness), hypermetropia (farsightedness), astigmatism, and presbyopia
(age-related visual impairment). The patient overlaps lines in up to eight meridians and
the Android app computes the prescription. The average accuracy is comparable to the
prior art and in some cases, even better. We propose the use of our technology as a
self-evaluation tool for use in homes, schools, and at health centers in developing
countries, and in places where an optometrist is not available or is too expensive.
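As background on this kind of meridian-based refraction measurement: the power measured along a meridian t of a sphero-cylindrical eye follows P(t) = S + C·sin²(t − A) for sphere S, cylinder C, and axis A, so a prescription can be fit from per-meridian readings. The sketch below only illustrates that computation; the function name and sampling are assumptions, not the NETRA app's code:

```python
import math

def fit_prescription(meridians_deg, powers):
    """Fit sphere S, cylinder C, and axis A (degrees) to per-meridian
    powers, assuming the standard model P(t) = S + C * sin^2(t - A).
    Meridians should sample 0-180 degrees roughly uniformly."""
    n = len(powers)
    # Rewrite the model at double angle:
    # P(t) = (S + C/2) - (C/2)cos(2A)cos(2t) - (C/2)sin(2A)sin(2t)
    mean_p = sum(powers) / n
    a = 2.0 / n * sum(p * math.cos(2.0 * math.radians(t))
                      for t, p in zip(meridians_deg, powers))
    b = 2.0 / n * sum(p * math.sin(2.0 * math.radians(t))
                      for t, p in zip(meridians_deg, powers))
    half_c = math.hypot(a, b)            # Fourier amplitude equals C/2
    sphere = mean_p - half_c
    cyl = 2.0 * half_c
    axis = ((math.degrees(math.atan2(b, a)) - 180.0) / 2.0) % 180.0
    return sphere, cyl, axis
```

With eight meridians spaced 22.5 degrees apart, the double-angle samples cover a full cycle, so the Fourier fit recovers S, C, and A exactly for noise-free data.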

297. New Methods in Time-of-Flight Imaging

Ramesh Raskar, Christopher Barsi, Ayush Bhandari, Anshuman Das, Micha Feigin-Almon and Achuta Kadambi
Time-of-flight (ToF) cameras are commercialized consumer cameras that provide a
depth map of a scene, with many applications in computer vision and quality
assurance. Currently, we are exploring novel ways of integrating the camera
illumination and detection circuits with computational methods to handle challenging
environments, including multiple scattering and fluorescence emission.
Alumni Contributor: Refael Whyte
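For background, continuous-wave ToF cameras typically recover depth from the phase of a modulated signal, d = c·φ/(4π·f). A minimal sketch assuming a four-bucket correlation scheme; the scheme and modulation frequency here are illustrative, not this project's hardware:

```python
import math

C_LIGHT = 299_792_458.0  # speed of light in m/s

def tof_depth(samples, f_mod):
    """Depth from four phase-stepped correlation samples (taken at 0, 90,
    180, and 270 degrees) of a continuous-wave ToF pixel:
    d = c * phase / (4 * pi * f_mod)."""
    a0, a1, a2, a3 = samples
    phase = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)
    return C_LIGHT * phase / (4.0 * math.pi * f_mod)
```

At a 30 MHz modulation frequency the unambiguous range is c/(2f), about 5 m, and a quarter-cycle phase corresponds to roughly 1.25 m.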

298. PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures

Ramesh Raskar, Aydin Arpa, Otkrist Gupta and Gabriel Taubin

299. Polarization Fields: Glasses-Free 3DTV

Douglas Lanman, Gordon Wetzstein, Matthew Hirsch, Wolfgang Heidrich, and Ramesh Raskar

We present a near real-time system for interactively exploring a collectively captured
moment without explicit 3D reconstruction. Our system favors immediacy and local
coherency over global consistency. It is common to represent photos as vertices of a
weighted graph. The weighted angled graphs of photos used in this work can be
regarded as the result of discretizing the Riemannian geometry of the high dimensional
manifold of all possible photos. Ultimately, our system enables everyday people to take
advantage of each others' perspectives in order to create on-the-spot spatiotemporal
visual experiences similar to the popular bullet-time sequence. We believe that this type
of application will greatly enhance shared human experiences, spanning from events as
personal as parents watching their children's football game to highly publicized
red-carpet galas.

We introduce polarization field displays as an optically efficient design for dynamic light
field display using multi-layered LCDs. Such displays consist of a stacked set of liquid
crystal panels with a single pair of crossed linear polarizers. Each layer is modeled as a
spatially controllable polarization rotator, as opposed to a conventional spatial light
modulator that directly attenuates light. We demonstrate that such displays can be
controlled, at interactive refresh rates, by adapting the SART algorithm to
tomographically solve for the optimal spatially varying polarization state rotations
applied by each layer. We validate our design by constructing a prototype using
modified off-the-shelf panels. We demonstrate interactive display using a GPU-based
SART implementation supporting both polarization-based and attenuation-based
architectures.

300. Portable Retinal Imaging

Everett Lawson, Jason Boggess, Alex Olwal, Gordon Wetzstein, and Siddharth
Khullar
The major challenge in preventing blindness is identifying patients and bringing them to
specialty care. Diseases that affect the retina, the image sensor in the human eye, are
particularly challenging to address, because they require highly trained eye specialists
(ophthalmologists) who use expensive equipment to visualize the inner parts of the eye.
Diabetic retinopathy, HIV/AIDS-related retinitis, and age-related macular degeneration

are three conditions that can be screened and diagnosed to prevent blindness caused
by damage to the retina. We exploit a combination of two novel ideas to simplify the
constraints of traditional devices, using simplified optics and clever illumination in order
to capture and visualize images of the retina in a standalone device easily operated by
the user. Prototypes are conveniently embedded in either a mobile hand-held retinal
camera or wearable eyeglasses.

301. Reflectance Acquisition Using Ultrafast Imaging

Ramesh Raskar and Nikhil Naik


We demonstrate a new technique that allows a camera to rapidly acquire reflectance
properties of objects "in the wild" from a single viewpoint, over relatively long distances
and without encircling equipment. This project has a wide variety of applications in
computer graphics, including image relighting, material identification, and image editing.
Alumni Contributor: Andreas Velten

302. Second Skin: Motion Capture with Actuated Feedback for Motor Learning

Ramesh Raskar, Kenichiro Fukushi, Christopher Schonauer and Jan Zizka


We have created a 3D motion-tracking system with automatic, real-time vibrotactile
feedback and an assembly of photo-sensors, infrared projector pairs, vibration motors,
and a wearable suit. This system allows us to enhance and quicken the motor learning
process in a variety of fields such as healthcare (physiotherapy), entertainment (dance),
and sports (martial arts).
Alumni Contributor: Dennis Ryan Miaw

303. Shield Field Imaging

Jaewon Kim
We present a single-shot, shadow-based method for scanning 3D objects. We decouple
3D occluders from 4D illumination using shield fields: the 4D attenuation function which
acts on any light field incident on an occluder. We then analyze occluder reconstruction
from cast shadows, leading to a single-shot light field camera for visual hull
reconstruction.

304. Single Lens Off-Chip Cellphone Microscopy

Ramesh Raskar and Aydin Arpa

305. Single-Photon Sensitive Ultrafast Imaging

Ramesh Raskar and Barmak Heshmat Dehkordi

306. Skin Perfusion Photography

Guy Satat and Ramesh Raskar

Within the last few years, cellphone subscriptions have spread widely and now cover
even the remotest parts of the planet. Adequate access to healthcare, however, is not
widely available, especially in developing countries. We propose a new approach to
converting cellphones into low-cost scientific devices for microscopy. Cellphone
microscopes have the potential to revolutionize health-related screening and analysis
for a variety of applications, including blood and water tests. Our optical system is more
flexible than previously proposed mobile microscopes, and allows for wide field-of-view
panoramic imaging, the acquisition of parallax, and coded background illumination,
which optically enhances the contrast of transparent and refractive specimens.

The ability to record images with extreme temporal resolution enables a diverse range
of applications, such as time-of-flight depth imaging and characterization of ultrafast
processes. Here we present a demonstration of the potential of single-photon detector
arrays for visualization and rapid characterization of events evolving on picosecond
time scales. The single-photon sensitivity, temporal resolution, and full-field imaging
capability enable the observation of light-in-flight in air, as well as the measurement of
laser-induced plasma formation and dynamics in its natural environment. The extreme
sensitivity and short acquisition times pave the way for real-time imaging of ultrafast
processes or visualization and tracking of objects hidden from view.

Skin and tissue perfusion measurements are important parameters for diagnosis of
wounds and burns, and for monitoring plastic and reconstructive surgeries. In this
project, we use a standard camera and a laser source in order to image blood-flow
speed in skin tissue. We show results of blood-flow maps of hands, arms, and fingers.
We combine the complex scattering of laser light from blood with computational
techniques found in computer science.
Alumni Contributor: Christopher Barsi

307. Slow Display

Daniel Saakes, Kevin Chiu, Tyler Hutchison, Biyeun Buczyk, Naoya Koizumi and
Masahiko Inami
How can we show our 16-megapixel photos from our latest trip on a digital display?
How can we create screens that are visible in direct sunlight as well as complete
darkness? How can we create large displays that consume less than 2W of power?
How can we create design tools for digital decal application and intuitive
computer-aided modeling? We introduce a display that is high resolution but updates at a low
frame rate, a slow display. We use lasers and monostable light-reactive materials to
provide programmable space-time resolution. This refreshable, high resolution display
exploits the time decay of monostable materials, making it attractive in terms of cost
and power requirements. Our effort to repurpose these materials involves solving
underlying problems in color reproduction, day-night visibility, and optimal time
sequences for updating content.

308. SpeckleSense

Alex Olwal, Andrew Bardagjy, Jan Zizka and Ramesh Raskar


Motion sensing is of fundamental importance for user interfaces and input devices. In
applications where optical sensing is preferred, traditional camera-based approaches
can be prohibitive due to limited resolution, low frame rates, and the required
computational power for image processing. We introduce a novel set of motion-sensing
configurations based on laser speckle sensing that are particularly suitable for
human-computer interaction. The underlying principles allow these configurations to be
fast, precise, extremely compact, and low cost.
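The underlying displacement estimate can be conveyed with a toy 1D sketch: track motion by cross-correlating successive speckle intensity traces. This is illustrative only; the real configurations use 2D optical speckle patterns and dedicated sensors, and the function name is an assumption:

```python
import numpy as np

def estimate_shift(sig_a, sig_b):
    """Estimate the integer displacement between two 1D speckle intensity
    traces by maximizing their cross-correlation (illustrative only)."""
    a = np.asarray(sig_a, dtype=float) - np.mean(sig_a)
    b = np.asarray(sig_b, dtype=float) - np.mean(sig_b)
    corr = np.correlate(b, a, mode="full")
    # Lag 0 sits at index len(a) - 1 in the 'full' output.
    return int(np.argmax(corr) - (len(a) - 1))
```

Because a speckle pattern is effectively a random but stable fingerprint of the illuminated surface, the correlation peak moves with the surface, which is what makes the sensing fast and precise.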

309. SpecTrans: Classification of Transparent Materials and Interactions

Munehiko Sato, Alex Olwal, Boxin Shi, Shigeo Yoshida, Atsushi Hiyama, Michitaka Hirose, Tomohiro Tanikawa, and Ramesh Raskar

310. StreetScore

Nikhil Naik, Jade Philipoom, Ramesh Raskar, Cesar Hidalgo

Surface and object recognition is of significant importance in ubiquitous and wearable
computing. While various techniques exist to infer context from material properties and
appearance, they are typically neither designed for real-time applications nor for
optically complex surfaces that may be specular, textureless, and even transparent.
These materials are, however, becoming increasingly relevant in HCI for transparent
displays, interactive surfaces, and ubiquitous computing. We present SpecTrans, a new
sensing technology for surface classification of exotic materials, such as glass,
transparent plastic, and metal. The proposed technique extracts optical features by
employing laser and multi-directional, multi-spectral LED illumination that leverages the
material's optical properties. The sensor hardware is small in size, and the proposed
classification method requires significantly lower computational cost than conventional
image-based methods, which use texture features or reflectance analysis, thereby
providing real-time performance for ubiquitous computing.

StreetScore is a machine learning algorithm that predicts the perceived safety of a
streetscape. StreetScore was trained using 2,920 images of streetscapes from New
York and Boston and their rankings for perceived safety obtained from a crowdsourced
survey. To predict an image's score, StreetScore decomposes this image into features
and assigns the image a score based on the associations between features and scores
learned from the training dataset. We use StreetScore to create a collection of map
visualizations of perceived safety of street views from cities in the United States.
StreetScore allows us to scale up the evaluation of streetscapes by several orders of
magnitude when compared to a crowdsourced survey. StreetScore can empower
research groups working on connecting urban perception with social and economic
outcomes by providing high resolution data on urban perception.
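The feature-to-score step can be illustrated with a simple linear model. The ridge-regression sketch below (function names and the choice of features are hypothetical, not StreetScore's actual learned model) only conveys the idea of mapping image feature vectors to perceived-safety scores learned from crowdsourced rankings:

```python
import numpy as np

def train_scorer(features, scores, lam=0.1):
    """Fit a ridge-regression scorer mapping image feature vectors to
    perceived-safety scores: w = (X^T X + lam*I)^-1 X^T y."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(scores, dtype=float)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def predict_score(w, feature_vec):
    """Score a new streetscape from its feature vector."""
    return float(np.asarray(feature_vec, dtype=float) @ w)
```

Once the weights are fit, scoring a new image is a single dot product, which is what lets this kind of model scale evaluation to millions of street views.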


311. Tensor Displays: High-Quality Glasses-Free 3D TV

Gordon Wetzstein, Douglas Lanman, Matthew Hirsch and Ramesh Raskar

312. Theory Unifying Ray and Wavefront Lightfield Propagation

George Barbastathis, Ramesh Raskar, Belen Masia, Se Baek Oh and Tom Cuypers

313. Time-of-Flight Microwave Camera

Ramesh Raskar, Micha Feigin-Almon, Andrew Temme and Gregory Charvat

NEW LISTING

314. Towards In-Vivo Biopsy

Tensor displays are a family of glasses-free 3D displays comprising all architectures
employing (a stack of) time-multiplexed LCDs illuminated by uniform or directional
backlighting. We introduce a unified optimization framework that encompasses all
tensor display architectures and allows for optimal glasses-free 3D display. We
demonstrate the benefits of tensor displays by constructing a reconfigurable prototype
using modified LCD panels and a custom integral imaging backlight. Our efficient,
GPU-based NTF implementation enables interactive applications. In our experiments
we show that tensor displays reveal practical architectures with greater depths of field,
wider fields of view, and thinner form factors, compared to prior automultiscopic
displays.

This work focuses on bringing powerful concepts from wave optics to the creation of
new algorithms and applications for computer vision and graphics. Specifically,
ray-based, 4D lightfield representation, based on simple 3D geometric principles, has
led to a range of new applications that include digital refocusing, depth estimation,
synthetic aperture, and glare reduction within a camera or using an array of cameras.
The lightfield representation, however, is inadequate to describe interactions with
diffractive or phase-sensitive optical elements. Therefore we use Fourier optics
principles to represent wavefronts with additional phase information. We introduce a
key modification to the ray-based model to support modeling of wave phenomena. The
two key ideas are "negative radiance" and a "virtual light projector." This involves
exploiting higher dimensional representation of light transport.

Our architecture takes a hybrid approach to microwaves and treats them like waves of
light. Most other work places antennas in a 2D arrangement to directly sample the RF
reflections that return. Instead, we use a single, passive, parabolic reflector (dish) as a
lens. Every point on that dish can be thought of as an antenna with a fixed phase offset,
so the lens acts as a fixed set of 2D antennas that are very dense and spaced across a
large aperture. We then sample the focal plane of that lens. This architecture makes it
possible to capture higher-resolution images at a lower cost.
Guy Satat, Barmak Heshmat, Dan Raviv and Ramesh Raskar
We present a new method to detect and distinguish between different types of
fluorescent materials. The technique provides a dramatically larger depth range than
previous methods, enabling medical diagnosis of body tissues without removing the
tissue from the body, which the current medical standard requires. It uses fluorescent
probes, which are commonly used in medical diagnosis, and characterizes them by their
fluorescence lifetime, the average time the fluorescence emission lasts. Because the
method can distinguish between different fluorescence lifetimes, it allows diagnosis of
deep tissues. Locating fluorescent probes in the body this way can, for example,
indicate the location of a tumor in deep tissue and classify it as malignant or benign
according to the fluorescence lifetime, eliminating the need for X-ray or biopsy.
Alumni Contributor: Christopher Barsi

315. Trillion Frames Per Second Camera

Andreas Velten, Di Wu, Adrin Jarabo, Belen Masia, Christopher Barsi, Chinmaya
Joshi, Everett Lawson, Moungi Bawendi, Diego Gutierrez, and Ramesh Raskar
We have developed a camera system that captures movies at an effective rate of
approximately one trillion frames per second. In one frame of our movie, light moves
only about 0.6 mm. We can observe pulses of light as they propagate through a scene.
We use this information to understand how light propagation affects image formation
and to learn things about a scene that are invisible to a regular camera.


316. Ultrasound Tomography

Ramesh Raskar, Micha Feigin-Almon and Brian Anthony

317. Unbounded High Dynamic Range Photography Using a Modulo Camera

Ramesh Raskar, Boxin Shi, Hang Zhao, Christy Fernandez-Cull and Sai-Kit Yeung

NEW LISTING

318. Vision on Tap

Traditional medical ultrasound assumes that we are imaging ideal liquids. We are
interested in imaging muscle and bone as well as measuring elastic properties of
tissues, cases where this assumption fails badly. Motivated by cancer detection,
Duchenne muscular dystrophy, and prosthetic fitting, we use tomographic techniques
as well as ideas from seismic imaging to deal with these issues.

We present Unbounded High Dynamic Range (UHDR) photography, a novel framework
that extends the dynamic range of images using a modulo camera. A modulo camera
could theoretically record unbounded radiance levels by keeping only the least
significant bits. We show that with limited bit depth, very high radiance levels can be
recovered from a single modulus image with our newly proposed unwrapping algorithm
for natural images. We can also obtain an HDR image with details equally well
preserved for all radiance levels by merging the smallest number of modulus images.
Synthetic experiments and experiments with a real modulo camera show the
effectiveness of the proposed approach.
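To convey the unwrapping idea on a single row of pixels: if neighboring radiance values differ by less than half the sensor's modulus, each wrap count can be chosen to minimize the local jump. This is a deliberately simplified 1D sketch, not the project's natural-image unwrapping algorithm:

```python
def unwrap_row(mod_vals, modulus=256):
    """Recover radiance along one image row from modulo-wrapped samples,
    assuming neighboring true values differ by less than modulus / 2."""
    out = [float(mod_vals[0])]
    for v in mod_vals[1:]:
        # Pick the wrap count k that makes the jump from the previously
        # recovered value as small as possible.
        k = round((out[-1] - v) / modulus)
        out.append(float(v + k * modulus))
    return out
```

For example, an 8-bit sensor observing a smooth ramp that climbs past 255 records wrapped values, and the smoothness assumption lets the row be rebuilt to its true radiance levels.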
Ramesh Raskar
Computer vision is a class of technologies that lets computers use cameras to
automatically stitch together panoramas, reconstruct 3D geometry from multiple
photographs, and even tell you when the water's boiling. For decades, this technology
has been advancing mostly within the confines of academic institutions and research
labs. Vision on Tap is our attempt to bring computer vision to the masses.
Alumni Contributor: Kevin Chiu

319. VisionBlocks

Chunglin Wen and Ramesh Raskar


VisionBlocks is an on-demand, in-browser, customizable, computer-vision
application-building platform for the masses. Even without any prior programming
experience, users can create and share computer vision applications. End-users drag
and drop computer vision processing blocks to create their apps. The input feed could
be either from a user's webcam or a video from the Internet. VisionBlocks is a
community effort where researchers obtain fast feedback, developers monetize their
vision applications, and consumers can use state-of-the-art computer vision techniques.
We envision a Vision-as-a-Service (VaaS) over-the-web model, with easy-to-use
interfaces for application creation for everyone.
Alumni Contributors: Abhijit Bendale, Kshitij Marwah, Jason Boggess and Kevin Chiu

320. Visual Lifelogging

Hyowon Lee, Nikhil Naik, Lubos Omelina, Daniel Tokunaga, Tiago Lucena and
Ramesh Raskar
We are creating a novel visual lifelogging framework for applications in personal life and
workplaces.


Mitchel Resnick: Lifelong Kindergarten


Engaging people in creative learning experiences.

321. App Inventor

Hal Abelson, Eric Klopfer, Mitchel Resnick, Andrew McKinney, CSAIL and
Scheller Teacher Education Program
App Inventor is an open-source tool that democratizes app creation. By combining
LEGO-like blocks onscreen, even users with no prior programming experience can use
App Inventor to create their own mobile applications. Currently, App Inventor has over
2,000,000 users and is being taught by universities, schools, and community centers
worldwide. In those initiatives, students not only acquire important technology skills
such as computer programming, but also have the opportunity to apply computational
thinking concepts to many fields including science, health, education, business, social
action, entertainment, and the arts. Work on App Inventor was initiated in Google
Research by Hal Abelson and is continuing at the MIT Media Lab as part of its Center
for Mobile Learning, a collaboration with the MIT Computer Science and Artificial
Intelligence Laboratory (CSAIL) and the Scheller Teacher Education Program (STEP).

322. Askii

J. Philipp Schmidt, Juliana Nazar


Askii is an SMS-based system that allows adult learners to study for a certification
exam while on their commute. When learners have a spare five minutes, they can
simply text Askii to begin their customized lessons. Askii will respond with a curated set
of questions and links to content that learners can study on the go. We have begun
building this prototype for learners to study for the US Naturalization Exam and plan to
expand to other certification courses. Askii is a prototype within the larger Making
Learning Work project.

323. Build in Progress

Tiffany Tseng and Mitchel Resnick


Build in Progress is a platform for sharing the story of your design process. With Build
in Progress, makers document their design process as it unfolds, incorporating
iterations and failures along the way and getting feedback as they develop their
projects.

324. Computer Clubhouse

Mitchel Resnick, Natalie Rusk, Chris Garrity, Alisha Panjwani


At Computer Clubhouse after-school centers, young people (ages 10-18) from
low-income communities learn to express themselves creatively with new technologies.
Clubhouse members work on projects based on their own interests, with support from
adult mentors. By creating their own animations, interactive stories, music videos, and
robotic constructions, Clubhouse members become more capable, confident, and
creative learners. The first Computer Clubhouse was established in 1993, as a
collaboration between the Lifelong Kindergarten group and The Computer Museum
(now part of the Boston Museum of Science). With financial support from Intel
Corporation, the network has expanded to more than 100 centers in 20 countries,
serving more than 20,000 young people. The Lifelong Kindergarten group continues to
develop new technologies, introduce new educational approaches, and lead
professional-development workshops for Clubhouses around the world.
Alumni Contributors: rberg, Leo Burd, Robbin Chapman, Rachel Garber, Tim Gorton,
Michelle Hlubinka, Elisabeth Sylvan and Claudia Urrea


325. Computer Clubhouse Village

Chris Garrity, Natalie Rusk, and Mitchel Resnick


The Computer Clubhouse Village is an online community that connects people at
Computer Clubhouse after-school centers around the world. Through the Village,
Clubhouse members and staff at more than 100 Clubhouses in 20 countries can share
ideas with one another, get feedback and advice on their projects, and work together on
collaborative design activities.
Alumni Contributors: Robbin Chapman, Rachel Garber and Elisabeth Sylvan

326. Duct Tape Network

Leo Burd, Rachel Garber, Alisha Panjwani, and Mitchel Resnick


The Duct Tape Network (DTN) is a series of fun, hands-on maker workshops that
encourage young children (ages 7-10) to use cardboard, tape, wood, fabric, LED lights,
motors, and more to bring their stories and inventions to life. We are designing an
educational framework and toolkit to engage kids in the creation of things that they care
about before they lose their curiosity or get pulled in by more consumer-oriented
technology. DTN workshops include team challenges and tinkering time. Come check
them out!

327. Family Creative Learning

Ricarose Roque, Natalie Rusk, and Mitchel Resnick

In Family Creative Learning, we engage parents and children in workshops to design and learn together with creative technologies, like the Scratch programming language and the MaKey MaKey invention kit. Just as children's literacy can be supported by parents reading with them, children's creativity can be supported by parents creating with them. In these workshops, we especially target families with limited access to resources and social support around technology. By promoting participation across generations, these workshops engage parents in supporting their children in becoming creators and full participants in today's digital society.

328. Learning Creative Learning

Mitchel Resnick, Philipp Schmidt, Natalie Rusk, Grif Peterson, Katherine McConachie, Srishti Sethi, Alisha Panjwani

Learning Creative Learning (http://learn.media.mit.edu/lcl) is an online course that introduces ideas and strategies for supporting creative learning. The course engages educators, designers, and technologists from around the world in applying creative learning tools and approaches from the MIT Media Lab. We view the course as an experimental alternative to traditional Massive Open Online Courses (MOOCs), putting greater emphasis on peer-to-peer learning, hands-on projects, and sustainable communities.

329. Learning with Data

Sayamindu Dasgupta, Natalie Rusk, Mitchel Resnick


More and more computational activities revolve around collecting, accessing, and manipulating large sets of data, but introductory approaches to learning programming are typically centered on algorithmic concepts and flow of control, not on data. Computational exploration of data, especially large datasets, has usually been restricted to predefined operations in spreadsheet software like Microsoft Excel. This project builds
on the Scratch programming language and environment to allow children to explore
data and datasets. With the extensions provided by this project, children can build
Scratch programs to not only manipulate and analyze data from online sources, but
also to collect data through various means such as surveys and crowd-sourcing. This
toolkit will support many different types of projects like online polls, turn-based
multiplayer games, crowd-sourced stories, visualizations, information widgets, and
quiz-type games.

330. Lemann Creative Learning Program
NEW LISTING

Mitchel Resnick and Leo Burd

The Lemann Creative Learning Program is a collaboration between the MIT Media Lab and the Lemann Foundation to foster creative learning in Brazilian public education. Established in February 2015, the program designs new technologies, support materials, and innovative initiatives to engage Brazilian public schools, afterschool centers, and families in learning practices that are more hands-on and centered on students' interests and ideas. For additional information, please contact [email protected].

331. Libranet

Philipp Schmidt, Katherine McConachie


Libranet is a model for facilitating in-person study groups at community libraries. Aimed
at adult learners, Libranet seeks to take advantage of libraries as free, open community
spaces for learning. This model utilizes open, online course material and pairs it with a
study group format to foster deeper, more meaningful adult basic educational
experiences.

332. Making Learning Work

J. Philipp Schmidt, Juliana Nazare, Katherine McConachie

Improving adult learning, especially for adults who are unemployed or unable to financially support their families, is a challenge that affects the future wellbeing of millions of individuals in the US. We are working with the Joyce Foundation, employers, learning researchers, and the Media Lab community to prototype three to five new models for adult learning that involve technology innovation and behavioral insights.

333. Making with Stories

Alisha Panjwani, Natalie Rusk, Mitchel Resnick

We are developing a set of participatory "maker" activities to engage youth in creating tangible projects that depict stories about themselves and their worlds. These activities introduce electronics and computational tools as a medium to create, connect, express, and derive meaning from personal narratives. For example, we are offering workshops where participants design sewable circuits and bring them together to create a collaborative Story Quilt. Through the Making with Stories project we are exploring how story-based pedagogy can inspire youth participation in arts and engineering within formal and informal learning environments.

334. Media Lab Digital Certificates
NEW LISTING

Philipp Schmidt, Juliana Nazare, Katherine McConachie, Srishti Sethi, and Guy Zyskind

The Media Lab will award certificates to members of our community who are outside of the academic program. A project from the Learning Learning initiative, the certificates are registered on the blockchain, cryptographically signed, and tamper-proof. These certificates can be designed to represent different contributions or recognition; what they stand for is included in the certificate itself. Through these objects we will critically explore notions of social capital and reputation, empathy and gift economies, and social behavior. We are also developing a blueprint/model for other organizations to start doing the same. The code is open source so that others can experiment with the idea of digital certificates; certificates issued by other organizations would have no connection to the Media Lab.
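The tamper-proofing idea behind such certificates can be sketched in a few lines. This is only an illustration: the actual system anchors certificates on a blockchain and uses public-key signatures, whereas here an HMAC over a canonical JSON serialization stands in for the signature. All names, fields, and the secret key are hypothetical.

```python
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # hypothetical issuer secret; stands in for a private key


def issue_certificate(recipient, contribution):
    """Build a certificate and attach a signature over its canonical form."""
    cert = {"recipient": recipient, "contribution": contribution}
    payload = json.dumps(cert, sort_keys=True).encode()  # canonical serialization
    cert["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return cert


def verify_certificate(cert):
    """Recompute the signature; any edit to the fields invalidates it."""
    body = {k: v for k, v in cert.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])


cert = issue_certificate("Alice", "workshop facilitator")
assert verify_certificate(cert)
cert["contribution"] = "something else"  # tampering with the certificate...
assert not verify_certificate(cert)      # ...is detected on verification
```

Sorting the keys before signing matters: it gives every party the same byte-level representation of the certificate, so verification does not depend on field order.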

335. Media Lab Virtual Visit

Srishti Sethi and J. Philipp Schmidt

Media Lab Virtual Visit is intended to open up the doors of the Media Lab to people from all around the world. The visit is hosted on the Unhangout platform, a new way of running large-scale unconferences on the web that was developed at the Media Lab. It is an opportunity for students or potential collaborators to talk with current researchers at the Lab, learn about their work, and share ideas.

336. ML Online Learning

Philipp Schmidt and Mitchel Resnick

Learning for everyone, by everyone. The Open Learning project builds online learning communities that work like the web: peer-to-peer, loosely joined, open. It works with Media Lab faculty and students to open up the magic of the Lab through online learning. Our first experiment was Learning Creative Learning, a course taught at the Media Lab, which attracted 24,000 participants. We are currently developing ideas for massive citizen science projects, engineering competitions for kids, and new physical infrastructures for learning that reclaim the library.


337. Para

Jennifer Jacobs, Mitchel Resnick, Joel Brandt, Sumit Gogia, and Radomir Mech
Procedural representations, enabled through programming, are a powerful tool for
digital illustration, but writing code conflicts with the intuitiveness and immediacy of
direct manipulation. Para is a digital illustration tool that uses direct manipulation to
define and edit procedural artwork. Through creating and altering vector paths, artists
can define iterative distributions, parametric constraints, and conditional behaviors.
Para makes it easier for people to create generative artwork, and creates an intuitive
workflow between manual and procedural drawing methods.

338. Read Out Loud

J. Philipp Schmidt and Juliana Nazare


Read Out Loud is an application that empowers adults learning English to turn almost any reading material into a learning experience. Learners can take a picture of a page of text; the app then scans the page and presents the learner with a host of additional tools to facilitate reading. The app can read the text aloud, which helps learners who are more comfortable with spoken English understand what is written. Learners can also select words to translate them into their native language. With this prototype, we
want to give adult learners more agency to learn from material that focuses on subjects
they care about, as well as increase access to English language learning material. Any
book from the public library could become learning material with support in their native
language. Read Out Loud is a prototype within the larger Making Learning Work
project.

339. Scratch

Mitchel Resnick, Natalie Rusk, Kasia Chmielinski, Andrew Sliwinski, Eric Schilling, Carl Bowman, Ricarose Roque, Sayamindu Dasgupta, Ray Schamp, Matt Taylor, Chris Willis-Ford, Juanita Buitrago, Carmelo Presicce, Moran Tsur, Brian Silverman, Paula Bonta
Scratch is a programming language and online community (http://scratch.mit.edu) that
makes it easy to create your own interactive stories, games, animations, and
simulations and share your creations online. As young people create and share Scratch
projects, they learn to think creatively, reason systematically, and work collaboratively,
while also learning important mathematical and computational ideas. Young people
around the world have shared more than 10 million projects on the Scratch website,
with thousands of new projects every day. (For information on who has contributed to
Scratch, see the Scratch Credits page: http://scratch.mit.edu/info/credits/).
Alumni Contributors: Amos Blanton, Karen Brennan, Gaia Carini, Michelle Chung,
Shane Clements, Margarita Dekoli, Evelyn Eastmond, Champika Fernando, John H.
Maloney, Amon Millner, Andres Monroy-Hernandez, Eric Rosenbaum, Jay Saul Silver
and Tamara Stern

340. Scratch Data Blocks

Sayamindu Dasgupta, Mitchel Resnick, Natalie Rusk, Benjamin Mako Hill


Scratch Data Blocks is an NSF-funded project that extends the Scratch programming
language to enable youth to analyze and visualize their own learning and participation
in the Scratch online community. With Scratch Data Blocks, youth in the Scratch
community can easily access, analyze, and represent data about the ways they
program, share, and discuss Scratch projects.


341. Scratch Day

Saskia Leggett, Lisa O'Brien, Kasia Chmielinski, Carl Bowman, and Mitchel
Resnick
Scratch Day (day.scratch.mit.edu) is a network of face-to-face local gatherings, on the
same day in all parts of the world, where people can meet, share, and learn more about
Scratch, a programming environment that enables people to create their own interactive
stories, games, animations, and simulations. We believe that these types of
face-to-face interactions remain essential for ensuring the accessibility and
sustainability of initiatives such as Scratch. In-person interactions enable richer forms of
communication among individuals, more rapid iteration of ideas, and a deeper sense of
belonging and participation in a community. The first Scratch Day took place in 2009. In
2014, there were 260 events in 56 countries.
Alumni Contributor: Karen Brennan

342. Scratch Extensions

Chris Willis-Ford, Andrew Sliwinski, Sayamindu Dasgupta, Mitchel Resnick


The Scratch extension system enables anyone to extend the Scratch programming
language through custom programming blocks written in JavaScript. The extension
system is designed to enable innovating on the Scratch programming language itself, in
addition to innovating with it through projects. With the extension system, anyone can
write custom Scratch blocks that enable others to use Scratch to program hardware
devices such as the LEGO WeDo, get data from online web-services such as
weather.com, and use advanced web-browser capabilities such as speech recognition.
Alumni Contributors: Amos Blanton, Shane Clements, Abdulrahman Y. idlbi and John
H. Maloney

343. ScratchJr

Mitchel Resnick, Marina Bers, Chris Garrity, Tim Mickel, Paula Bonta, and Brian
Silverman
ScratchJr makes coding accessible to younger children (ages 5-7), enabling them to
program their own interactive stories, games, and animations. To make ScratchJr
developmentally appropriate for younger children, we revised the interface and
provided new structures to help young children learn relevant math concepts and
problem-solving strategies. ScratchJr is available as a free app for iPads and Android.
ScratchJr is a collaboration between the MIT Media Lab, Tufts University, and Playful
Invention Company.
Alumni Contributors: Sayamindu Dasgupta and Champika Fernando

344. Spin

Tiffany Tseng and Mitchel Resnick


Spin is a photography turntable system that lets you capture how your DIY projects
come together over time. With Spin, you can create GIFs and videos of your projects
that you can download and share on Twitter, Facebook, or any other social network.

345. Start Making!

Alisha Panjwani, Natalie Rusk, Jie Qi, Chris Garrity, Tiffany Tseng, Jennifer
Jacobs, Mitchel Resnick
The Lifelong Kindergarten group is collaborating with the Museum of Science in Boston
to develop materials and workshops that engage young people in "maker" activities in
Computer Clubhouses around the world, with support from Intel. The activities
introduce youth to the basics of circuitry, coding, crafting, and engineering. In addition,
graduate students are testing new maker technologies and workshops for Clubhouse
staff and youth. The goal of the initiative is to help young people from under-served
communities gain experience and confidence in their ability to design, create, and
invent with new technologies.
Alumni Contributors: David A. Mellis and Ricarose Roque


346. Unhangout

Philipp Schmidt, Drew Harry, Charlie DeTar, Srishti Sethi, and Katherine
McConachie
Unhangout is an open-source platform for running large-scale unconferences online.
We use Google Hangouts to create as many small sessions as needed, and help users
find others with shared interests. Think of it as a classroom with an infinite number of
breakout sessions. Each event has a landing page, which we call the lobby. When
participants arrive, they can see who else is there and chat with each other. The hosts
can do a video welcome and introduction that gets streamed into the lobby. Participants
then break out into smaller sessions (up to 10 people per session) for in-depth
conversations, peer-to-peer learning, and collaboration on projects. Unhangouts are
community-based learning instead of top-down information transfer.
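The lobby-to-breakout flow that Unhangout describes can be sketched as a simple grouping step: participants gather in a lobby, then split into sessions capped at 10 people each (the limit stated above). The function and participant names are invented for illustration; the real platform lets participants choose their own sessions rather than assigning them.

```python
MAX_PER_SESSION = 10  # Unhangout caps breakout sessions at 10 participants


def assign_breakouts(lobby):
    """Split a lobby list into breakout sessions of at most 10 participants."""
    return [lobby[i:i + MAX_PER_SESSION]
            for i in range(0, len(lobby), MAX_PER_SESSION)]


sessions = assign_breakouts([f"participant{n}" for n in range(23)])
print([len(s) for s in sessions])  # → [10, 10, 3]
```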

Deb Roy: Social Machines


Building systems solutions for social change.

347. AINA: Aerial Imaging and Network Analysis
NEW LISTING

Deb Roy and Neo (Mostafa) Mohsenvand

This project is aimed at building a machine learning pipeline that will discover and predict links between the visible structure of villages and cities (using satellite and aerial imaging) and their inhabiting social networks. The goal is to map digitally invisible villages in India and Sub-Saharan Africa. By estimating the social structure of these communities, we aim to enable targeted intervention and optimized distribution of information, education technologies, goods, and medical aid. Currently, this pipeline is implemented using a GPU-powered deep learning system. It is able to detect buildings and roads and provide detailed information about the organization of the villages. The output will be used to construct probabilistic models of the underlying social network of the village. Moreover, it will provide information on the population, distribution of wealth, rate and direction of development (when longitudinal imaging data is available), and disaster profile of the village.

348. Human Atlas
NEW LISTING

Deb Roy, Martin Saveski, Soroush Vosoughi and Eric Chu

This project aims to map and analyze the publicly knowable social connections of various communities, allowing us to gain unprecedented insights about the social dynamics in such communities. Most analyses of this sort map online social networks, such as Twitter, Facebook, or LinkedIn. While these networks encode important aspects of our lives (e.g., our professional connections), they fail to capture many real-world relationships. Most of these relationships are, in fact, public and known to the community members. By mapping this publicly knowable graph, we get a unique view of the community that allows us to gain a deeper understanding of its social dynamics. To this end, we built a web-based tool that is simple, easy to use, and allows the community to map itself. Our goal is to deploy this tool in communities of different sizes, including the Media Lab community and the Spanish town of Jun.

349. Journalism Mapping and Analytics Project (JMAP)

Deb Roy, Sophie Chou, Pau Kung, Neo Mohsenvand, William Powers

Over the last two decades, digital technologies have flattened old hierarchies in the news business and opened the conversation to a multitude of new voices. To help comprehend this promising but chaotic new public sphere, we're building a "social news machine" that will provide a structured view of the place where journalism meets social media. The basis of our project is a two-headed data ingest: on one side, all the news published online 24/7 by a sample group of influential US media outlets; on the other, all Twitter comments of the journalists who produced the stories. The two streams will be joined through network analysis and algorithmic inference. In future work we plan to expand the analysis to include all the journalism produced by major news outlets and the overall public response on Twitter, shedding new light on such issues as bias, originality, credibility, and impact.


350. Responsive Communities: Pilot Project in Jun, Spain

Deb Roy, Martin Saveski, William Powers

To gain insights into how digital technologies can make local governments more responsive and deepen citizen engagement, we are studying the Spanish town of Jun (population 3,500). For the last four years, Jun has been using Twitter as its principal medium for citizen-government communication. We are mapping the resulting social networks and analyzing the dynamics of the Twitter interactions, in order to better understand the initiative's impact on the town. Our long-term goal is to determine whether the system can be replicated at scale in larger communities, perhaps even major cities.

351. Rumor Gauge: Automatic Detection and Verification of Rumors in Twitter

Soroush Vosoughi and Deb Roy

The spread of malicious or accidental misinformation in social media, especially in time-sensitive situations such as real-world emergencies, can have harmful effects on individuals and society. Motivated by this, we are creating computational models of false and true information on Twitter to investigate the nature of rumors surrounding real-world events. These models take into account the content, the characteristics of the people involved, and the virality of information to predict veracity. The models have been trained and evaluated on several real-world events, such as the 2013 Boston Marathon bombings, the 2014 Ferguson riots, and the Ebola epidemic, with promising results. We believe our system will have immediate real-world applications for consumers of news, journalists, and emergency services, and that it can help minimize and dampen the impact of misinformation.

352. Social Literacy Learning

Deb Roy, Anneli Hershman, Ivan Sysoev and Preeta Bansal

While there are a number of literacy technology solutions developed for individuals, the role of "social machines"--or social literacy learning--is less explored. We believe that literacy is an inherently social activity that is best learned within a supportive community network including peers, teachers, and parents. By creating technology that is child-driven and machine-guided, we hope to empower human learning networks in order to establish a nutrient medium for literacy growth while enhancing personal, creative, and autonomous interactions within communities. We are planning to pilot and deploy our solutions in two environments: (1) Boston: we are planning to create a cross-age peer mentorship program to engage students from different communities in socially collaborative, self-expressive literacy learning opportunities via mobile devices. (2) India: we are planning to use mobile devices to create hyperlocal learning networks in rural areas where the need for literacy is particularly acute.
Alumni Contributor: Prashanth Vijayaraghavan
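The kind of veracity prediction described for Rumor Gauge (project 351) — combining content, user, and virality signals into a single score — can be illustrated with a toy linear model. The feature names, weights, and the logistic combination below are all invented for illustration; the real system's features and training procedure are described in the group's publications.

```python
import math

# Hypothetical feature weights; positive values push toward "true",
# negative toward "rumor". None of these numbers come from the real system.
WEIGHTS = {
    "fraction_verified_users": 2.0,   # share of verified accounts spreading the story
    "avg_follower_count_log": 0.5,    # reach of the people involved
    "negation_word_rate": -3.0,       # hedging/denial language in the tweets
    "retweet_depth": -0.8,            # depth of the viral cascade
}
BIAS = -0.1


def veracity_score(features):
    """Logistic combination of features -> probability the story is true."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


story = {
    "fraction_verified_users": 0.6,
    "avg_follower_count_log": 3.2,
    "negation_word_rate": 0.05,
    "retweet_depth": 2.0,
}
p_true = veracity_score(story)
print(round(p_true, 3))  # → 0.721
```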

353. The Electome: Measuring Responsiveness in the 2016 Election
NEW LISTING

Deb Roy, Russell Stevens, Soroush Vosoughi, William Powers, Sophie Chou, Perng-Hwa Kung, Neo (Mostafa) Mohsenvand, Raphael Schaad and Prashanth Vijayaraghavan

The Electome project is a comprehensive mapping of the content and network connections among the campaign's three core public sphere voices: candidates (and their advocates), media (journalists and other mainstream media sources), and the public. This mapping is used to trace the election's narratives as they form, spread, morph, and decline among these three groups, identifying who and what influences these dynamics. We are also developing metrics that measure narrative alignment and misalignment among the groups, sub-groups (political party, media channel, advocacy group, etc.), and specific individuals/organizations (officials, outlets, journalists, influencers, sources, etc.). The Electome can be used to promote more responsive elections by deploying analyses, metrics, and data samples that improve the exchange of ideas among candidates, the media, and/or the public in the public sphere of an election.


Chris Schmandt: Living Mobile


Enhancing mobile life through improved user interactions.

354. Activ8

Misha Sra and Chris Schmandt


Activ8 is a system of three short games: See-Saw, a balancing game for Glass; Jump Beat, a music beat-matching game for Glass; and Learning to Fly, a Kinect game where users keep a virtual bird in the air by flapping their arms. Recent epidemiological evidence points to sitting as the most common contributor to an inactive lifestyle. We aim to offer a starting point for designing and building an understanding of how "physical casual games" can help address the perils of sitting.

355. Amphibian: Terrestrial Scuba Diving Using Virtual Reality
NEW LISTING

Harpreet Sareen, Jingru Guo, Misha Sra, Chris Schmandt, Dhruv Jain, Raymond
Wu and Rodrigo Victor De Melo Marques
Oceans are home to more biodiversity than anywhere else on this planet. Scuba diving
as a sport has helped us explore beautiful corals, striking fishes, and magnificent
wrecks. However, ocean diving is mentally and physically challenging. Prior research efforts have simulated maritime scenes virtually in comfortable environments, but most of these simulations are only projections on a computer screen or enclosed walls, so a realistic feeling of diving is never achieved. The few systems that are more realistic require people to swim in a pool or tank, which is cumbersome and tedious. Amphibian is a virtual reality system that allows the user to experience the ocean environment in a convenient terrestrial setting.

356. Kaan: Wristband to Educate Deaf Children about Social Norms

Dhruv Jain and Chris Schmandt

For young children with hearing loss, learning is difficult and their social development is often at risk. This impairment causes changes in their social behavior, and they may be deemed "socially awkward" over a task as simple as setting down an object: it is hard for them to hear whether an object was set down in a polite manner or in a way that is disruptive to others. Kaan is a wearable wristband that signals and alerts a person with hearing loss if the sound emitted by setting down an object is disruptive. The long-term goal is to educate children on acceptable ranges of motion of objects in their everyday lives. The current device consists of a wearable wristband that vibrates and emits light if the loudness of the sound around it goes beyond a certain threshold.

357. Meta-Physical-Space VR
NEW LISTING

Misha Sra and Chris Schmandt

Experience new dimensions and worlds without limits with friends. Encounter a new physical connection within the virtual world. Explore virtual spaces by physically exploring the real world. Interact with virtual objects by physically interacting with real-world objects. Initial physical sensations include touching objects, structures, and people, while we work on adding sensations of pressure, temperature, moisture, smell, and other sensory experiences.

358. MugShots

Cindy Hsin-Liu Kao and Chris Schmandt

MugShots enables visual communication through everyday objects. We embed a small display into a coffee mug, an object with frequent daily use. Targeted for the workplace, the mug transitions between different communication modes in public and private spaces. In the private office space, the mug is an object for intimate communication between remote friends; users receive emoticon stickers via the display. When brought to a public area, the mug switches to a pre-selected image of the user's choice, serving as a social catalyst to trigger conversations in public spaces.


359. NailO

Cindy Hsin-Liu Kao, Artem Dementyev, Joe Paradiso, Chris Schmandt


NailO is a nail-mounted gestural input surface inspired by commercial nail stickers. Using capacitive sensing on printed electrodes, the interface can distinguish on-nail finger-swipe gestures with high accuracy (>92 percent). NailO works in real time: the system is miniaturized to fit on the fingernail while wirelessly transmitting the sensor data to a mobile phone or PC. NailO allows for one-handed and always-available input, while being unobtrusive and discreet. The device blends into the user's body, and is customizable, fashionable, and even removable.
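One way a grid of capacitive electrodes can yield swipe gestures is by comparing when each electrode's activation peaks: a finger crossing the nail touches electrodes in spatial order over time. The sketch below illustrates that idea only; the electrode layout, signals, and function names are invented, and NailO's actual recognition pipeline is described in the group's publications.

```python
def swipe_direction(peak_times):
    """Classify a horizontal swipe from per-electrode peak times.

    peak_times maps an electrode's x-position across the nail (0.0 = left
    edge, 1.0 = right edge) to the time, in seconds, at which its capacitive
    reading peaked. The electrode that peaked first tells us where the
    finger started.
    """
    ordered = sorted(peak_times, key=peak_times.get)  # electrode positions by peak time
    return "left-to-right" if ordered[0] < ordered[-1] else "right-to-left"


# A finger sweeping rightward activates the left electrode first.
print(swipe_direction({0.0: 0.01, 0.5: 0.03, 1.0: 0.05}))  # → left-to-right
```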

360. OnTheGo

Misha Sra, Chris Schmandt


As mobile device screens continue to get smaller (smartwatches, head-mounted
devices like Google Glass), touch-based interactions with them become harder. With
OnTheGo, our goal is to complement touch- and voice-based input on these devices by
adding interactions through in-air gestures around the devices. Gestural interactions
are not only intuitive for certain situations where touch may be cumbersome like
running, skiing, or cooking, but are also convenient for things like quick application and
task management, certain types of navigation and interaction, and simple inputs to
applications.

361. Spotz

Chris Schmandt and Misha Sra


Exploring your city is a great way to make friends, discover new places, find new
interests, and invent yourself. Spotz is an Android app where everyone collectively
defines the places they visit and the places in turn define them. Spotz allows you to
discover yourself by discovering places. You tag a spot and create some buzz for it; if
everyone agrees the spot is fun this bolsters your "fun" quotient. If everyone agrees the
spot is geeky it pushes up your "geeky" score. Thus emerges your personal tag cloud.
Follow tags to chance upon new places. Find people with similar tag clouds as your
own and experience new places together. Create buzz for your favorite spots and track
other buzz to find who has the #bestchocolatecake in town!

362. Variable Reality: Interaction with the Virtual Book

Hye Soo Yang and Chris Schmandt


Variable Reality is an augmented reality system designed for reading digital and
physical books more intuitively and efficiently. Through a head-worn display device
such as Oculus Rift, the user is able to instantly access and display any desired book
contents onto either a real book or a hand, depending on the need and affordability.
Quick hand gestures integrated with the system further facilitate natural user
interactions.

Kevin Slavin: Playful Systems


Designing systems that become experiences to transcend utility and usability.

363. Tools for Super-Human Time Perception

Kevin Slavin and Che-Wei Wang


Time perception is a fundamental component in our ability to build mental models of our
world. Without accurate and precise time perception, we might have trouble
understanding speech, fumble social interactions, have poor motor control, hallucinate,
or remember events incorrectly. Slight distortions in time perception are commonplace
and may lead to slight dyslexia, memory shifts, poor eye-hand coordination, and other
relatively benign symptoms, but could a diminishing sense of time signal the onset of a
serious brain disorder? Could time perception training help prevent or reverse brain
disorders? This project is a series of experimental tools built to assist and increase
human time perception. By approaching time-perception training from various
perspectives, we hope to find a tool or collection of tools to increase time perception,
and in turn discover what an increase in time perception might afford us.


364. 20 Day Stranger

Kevin Slavin, Julie Legault, Taylor Levy, Che-Wei Wang, Dalai Lama Center for
Ethics and Transformative Values and Tinsley Galyean
20 Day Stranger is a mobile app that creates an intimate and anonymous connection between you and another person. For 20 days, you get continuous updates about where they are, what they are doing, and eventually even how they are feeling, and they get the same updates about you. But you will never know who this person is. Does this change the way you think about the other people you see throughout your day, any one of whom could be your stranger?

365. 32,768 Times Per


Second

Kevin Slavin and Taylor Levy

366. Amino: A Tamagotchi for Synthetic Biology

Kevin Slavin and Julie Legault

Amino is a design-driven mini-lab that allows users to carry out a bacterial
transformation and enables the subsequent care and feeding of the cells that are
grown. Inspired by Tamagotchis, the genetic transformation of an organism's DNA is
performed by the user through guided interactions, resulting in a synthetic organism
that can be cared for like a pet. Amino is developed using low-cost ways of carrying out
lab-like procedures in the home, and is packaged in a suitcase-sized continuous
bioreactor for cells.

367. AutomaTiles
NEW LISTING

Kevin Slavin and Jonathan Bobrow

A tabletop set of cellular automata ready to exhibit complex systems through simple
behaviors, AutomaTiles explores emergent behavior through tangible objects.
Individually they live as simple organisms, imbued with a simple personality; together
they exhibit something "other" than the sum of their parts. Through communication with
their neighbors, complex interactions arise. What will you discover with AutomaTiles?

368. beneath the chip

Kevin Slavin and Taylor Levy

Sculptural artifacts that model and reveal the embedded history of human thought and
scientific principles hidden inside banal digital technologies. These artifacts provide
alternative ways to engage and understand the deepest interior of our everyday
devices, below the circuit, below the chip. They build a sense of the machines within
the machine, the material, the grit of computation. The crystal oscillator inside a quartz
wristwatch vibrates 32,768 times per second. This is too fast for a human to perceive,
and it's even more difficult to imagine its interaction with the mechanical circulation of a
clock. 32,768 Times Per Second is a diagrammatic, procedural, and fully functional
sculpture of the electro-mechanical landscape inside a common wristwatch. Through a
series of electronic transformations, the signal from a crystal is broken down over and
over, and then built back up to the human sense of time.
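The arithmetic behind that sculpture is concrete: 32,768 is 2 to the 15th power, so fifteen successive divide-by-two stages (the binary counters inside a watch chip) reduce the crystal's signal to exactly one pulse per second. A minimal sketch of that divider chain; the function is illustrative, not the sculpture's circuitry:

```python
# 32,768 Hz is 2**15 Hz, so halving the frequency fifteen times
# yields 1 Hz -- the once-per-second tick a watch can display.

def divide_down(freq_hz=32768, stages=15):
    """Halve the frequency once per divider stage, as a watch IC does."""
    for _ in range(stages):
        freq_hz //= 2
    return freq_hz

assert divide_down() == 1            # 32768 Hz -> 1 Hz, the seconds tick
assert divide_down(32768, 5) == 1024  # five stages in: still inaudible to intuition
```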

369. Case and Molly

Gregory Borenstein
Case and Molly is a prototype for a game inspired by (and in homage to) William
Gibson's novel Neuromancer. It's about the coordination between virtual and physical,
"cyberspace" and "meat." We navigate the tension between our physical surroundings
and our digital networks in a state of continuous partial attention; Case and Molly uses
the mechanics and aesthetics of Neuromancer to explore this quintessential
contemporary dynamic. The game is played by two people mediated by smartphones
and an Oculus Rift VR headset. Together, and under time pressure, they must navigate
Molly through physical space using information that is only available to Case. In the
game, Case sees Molly's point of view in immersive 3D, but he can only communicate a
single bit of information to her. Meanwhile, Molly traverses physical obstacles hoping
Case can solve abstract puzzles in order to gain access to the information she needs.

Page 76

October 2015

MIT Media Lab

370. Cordon Sanitaire

Kevin Slavin
Named for, and inspired by, the medieval practice of erecting barriers to prevent the
spread of disease, Cordon Sanitaire is a collaborative, location-based mobile game in
which players seek to isolate an infectious "patient zero" from the larger population.
Every day, the game starts abruptly, synchronizing all players at once, and lasts for two
minutes. In 60 seconds, players must choose either to help form the front line of a
quarantine, or remain passive. Under pressure, the "uninfected" attempt to collaborate
without communication, seeking to find the best solution for the group. When those 60
seconds end, a certain number of players are trapped inside with patient zero, and the
score reflects the group's ability to cooperate under duress.

371. Darkball

Che-Wei Wang
Cristiano Ronaldo can famously volley a corner kick in total darkness. The magic
behind this remarkable feat is hidden in Ronaldo's brain, which enables him to use
advance cues to plan upcoming actions. Darkball challenges your brain to do the same,
distilling that scenario into its simplest form: intercept a ball in the dark. All you see is all
you need.

372. DeepView: Computational Tools for Chess Spectatorship

Gregory Borenstein, Kevin Slavin, and Maurice Ashley

Competitive chess is an exciting spectator sport. It is fast-paced, dynamic, and deeply
psychological. Unfortunately, most of the game's drama is only visible to spectators
who are themselves expert chess players. DeepView seeks to use computational tools
to make the drama of high-level chess accessible to novice viewers. There is a long
tradition of software trying to beat human players at chess; DeepView takes advantage
of algorithmic tools created in the development of advanced chess engines such as
Deep Blue, but instead uses them to understand and explain the styles of individual
players and the dynamics of a given match. It puts into the hands of chess
commentators powerful data science tools that can calculate player position
preferences and likely game outcomes, helping commentators to better explain the
exciting human story inside every match.

373. Designing Immersive Multi-Sensory Eating Experiences

Kevin Slavin and Janice Wang

Food offers a rich multi-modal experience that can deeply affect emotion and memory.
We're interested in exploring the artistic and expressive potential of food beyond mere
nourishment, as a means of creating memorable experiences that involve multiple
senses. For instance, music can change our eating experience by altering our emotions
during the meal, or by evoking a specific time and place. Similarly, sight, smell, and
temperature can all be manipulated to combine with food for expressive effect. In
addition, by drawing upon people's physiology and upbringing, we seek to create
individual, meaningful sensory experiences. Specifically, we are exploring the
connection between music and flavor perception.

374. Dice++

Kevin Slavin and Jonathan Bobrow

Today, algorithms drive our cars, our economy, what we read, and how we play.
Modern-day computer games utilize weighted probabilities to make games more
competitive, fun, and addictive. In casinos, slot machines--once a product of simple
probability--employ similar algorithms to keep players playing. Dice++ takes the
seemingly straight probability of rolling a die and determines an outcome with
algorithms of its own.
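A weighted die of the kind Dice++ hints at can be sketched in a few lines. The weights below are invented for illustration; they are not the project's actual algorithm:

```python
import random

# A six-sided die whose faces carry unequal weights -- here face 6
# is twice as likely as any other face (an invented example).

def roll(weights=(1, 1, 1, 1, 1, 2), rng=random):
    """Roll a weighted six-sided die and return the face (1-6)."""
    return rng.choices(range(1, 7), weights=weights, k=1)[0]

counts = {face: 0 for face in range(1, 7)}
for _ in range(7000):
    counts[roll()] += 1
# face 6, with double weight, turns up roughly twice as often as the others
```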


375. EyeWire

Sebastian Seung, Kevin Slavin, Gregory Borenstein, Taylor Levy, David Robert,
Che-Wei Wang and Seung Lab (MIT BCS)
The Seung Lab at MIT's Brain + Cognitive Sciences Department has developed
EyeWire, a game to map the brain. To date, it has attracted an online community of
over 50,000 "citizen neuroscientists" who are mapping the 3D structure of neurons and
discovering neural connections. Playful Systems is collaborating with the Seung Lab to
reconsider EyeWire as a large-scale, mass-appeal mobile game to attract one million
players or more. We are currently developing mobile, collaborative game mechanics, and
shifting the focus to short-burst gameplay.

376. GAMR
NEW LISTING

Shoshannah Tekofsky and Kevin Slavin

Does how you play reflect who you really are? The Media Lab and Tilburg University
are bringing science into the game to figure out the connections between our play style
and our cognitive traits. To do that, we are gathering data from League of Legends,
World of Warcraft, Battlefield 4, and Battlefield: Hardline players to gain insights
across all the major online game genres (MOBA, MMORPG, and FPS). In return, every
participant will get an in-depth GAMR profile that shows their personality, brain type,
and gamer type.

377. Homeostasis

Kevin Slavin, Kamal Farah, Julie Legault and Denis Bozic

A large-scale art installation that investigates the biological systems that represent and
embody human life, and their relationship to the built environment. This synthetic
organism, built from interconnected microbiological systems, will be sustained in part
through its own feedback and feedforward loops, but also through interactions with the
architectural systems (like HVAC). As the different systems react and exchange
material inputs and outputs, they move towards homeostasis. In the process,
Homeostasis creates a new landscape of the human body, in which we can experience
the wonder and vulnerability of its interconnected systems.
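The negative-feedback idea behind homeostasis can be illustrated with a toy simulation: sense the distance from a setpoint and push back proportionally. The 37.0 setpoint and 0.25 gain below are invented for illustration, not parameters of the installation:

```python
# Negative feedback: each iteration nudges the state a fixed fraction
# of the way toward the setpoint, so deviations decay geometrically.

def step(state, setpoint=37.0, gain=0.25):
    """One feedback iteration: move the state toward the setpoint."""
    return state + gain * (setpoint - state)

state = 20.0
for _ in range(50):
    state = step(state)
# after 50 iterations, state has converged very close to the setpoint
```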

378. MicroPsi: An Architecture for Motivated Cognition

Joscha Bach

The MicroPsi project explores broad models of cognition, built on a motivational system
that gives rise to autonomous social and cognitive behaviors. MicroPsi agents are
grounded AI agents, with neuro-symbolic representations, affect, top-down/bottom-up
perception, and autonomous decision making. We are interested in finding out how
motivation informs social interaction (cooperation and competition, communication and
deception), learning, and playing; shapes personality; and influences perception and
creative problem-solving.

379. radiO_o

Kevin Slavin, Mark Feldmeier, Taylor Levy, Daniel Novy and Che-Wei Wang

radiO_o is a battery-powered speaker worn by hundreds of party guests, turning each
person into a local mobile sound system. The radiO_o broadcast system allows the DJ
to transmit sounds over several pirate radio channels to mix sounds between hundreds
of speakers roaming around the space and the venue's existing sound system.

380. Sneak: A Hybrid Digital-Physical Tabletop Game

Greg Borenstein and Kevin Slavin

Sneak is a hybrid digital tabletop game for two to four players about deception, stealth,
and social intuition. Each player secretly controls one agent in a procedurally generated
supervillain lair. Their mission is to find the secret plans and escape without getting
discovered, shot, or poisoned by another player. To accomplish this, players must
interact and blend in with a series of computer-controlled henchmen while keeping a
close eye on their human opponents for any social cues that might reveal their identity.
Sneak introduces a number of systems that are common in video games but were
impractical in tabletop games that did not deeply integrate a smartphone app. These
include procedural map generation, NPC pathfinding, dynamic game balancing, and the
use of sound.
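Of the systems listed above, NPC pathfinding is the simplest to sketch: a breadth-first search finds a shortest route through a grid map. The lair layout below is illustrative, not Sneak's actual map format:

```python
from collections import deque

# Breadth-first search over a grid: '#' is a wall, '.' is open floor.
# Returns a shortest path as a list of (row, col) cells, or None.

def bfs_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        cur = frontier.popleft()
        if cur == goal:                # reconstruct path by walking back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None                        # goal unreachable

lair = ["..#.",
        "..#.",
        "...."]
path = bfs_path(lair, (0, 0), (0, 3))  # a henchman routes around the wall
```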


381. Soft Exchange: Interaction Design with Biological Interfaces

Kevin Slavin and Kamal Farah

The boundaries and fabric of human experience are continuously redefined by
microorganisms, interacting at an imperceptible scale. Though hidden, these systems
condition our bodies, environment, and even sensibilities and desires. The proposed
works introduce a model of interaction in which the microbiome is an extension of the
human sensory system, accessed through a series of biological interfaces that enable
exchange. Biological Interfaces transfer discrete behaviors of microbes into information
across scales, where it may be manipulated, even if unseen. In the same way the field
of HCI has articulated our exchanges with electronic signals, Soft Exchange opens up
the question of how to design for this other invisible, though present, and vital material.

382. Storyboards

Sepandar Kamvar, Kevin Slavin, Jonathan Bobrow and Shantell Martin

Giving opaque technology a glass house, Storyboards present the tinkerers or owners
of electronic devices with stories of how their devices work. Just as the circuit board is a
story of star-crossed lovers Anode and Cathode with its cast of characters (resistor,
capacitor, transistor), Storyboards have their own characters driving a parallel visual
narrative.

383. Troxes
NEW LISTING

Jonathan Bobrow
The building blocks we grow up with and the coordinate systems we are introduced to
at an early age shape the design space with which we think. Complex systems are
difficult to understand because they often require transition from one coordinate system
to another. We could even begin to say that empathy is precisely this ability to map
easily to many different coordinates. Troxes is a building blocks kit based on the
triangle, where kids get to build their building blocks and then assemble Platonic and
Archimedean solids.

Ethan Zuckerman: Civic Media


Creating, deploying, and evaluating tools and practices that foster civic participation and
the flow of information within and between communities.

384. "Make the Breast Pump Not Suck!" Hackathon

Tal Achituv, Catherine D'Ignazio, Alexis Hope, Taylor Levy, Alexandra Metral,
Che-Wei Wang

In September 2014, 150 parents, engineers, designers, and healthcare practitioners
gathered at the MIT Media Lab for the "Make the Breast Pump Not Suck!" Hackathon.
As one of the midwives at our first hackathon said, "Maternal health lags behind other
sectors for innovation." This project brought together people from diverse fields,
sectors, and backgrounds to take a crack at making life better for moms, babies, and
new families.

385. Action Path

Erhardt Graeff, Rahul Bhargava, Emilie Reiser and Ethan Zuckerman

Action Path is a mobile app to help citizens provide meaningful feedback about
everyday spaces in their cities by inviting engagement when near current issues.
Existing platforms for civic engagement, whether online or offline, are inconvenient and
disconnected from the source of issues they are meant to address. Action Path
addresses barriers to effective civic engagement in community matters by ensuring
more citizens have their voices heard on how to improve their local
communities--converting individual actions into collective action and providing context
and a sense of efficacy, which can help citizens become more effective through regular
practice and feedback.


386. Civic Crowdfunding Research Project

Ethan Zuckerman and Rodrigo Davies

The Civic Crowdfunding project is an initiative to collect data and advance social
research into the emerging field of civic crowdfunding, the use of online crowdfunding
platforms to provide services to communities. The project aims to bring together folks
from across disciplines and professions--from research and government to the tech
sector and community organizations--to talk about civic crowdfunding and its benefits,
challenges, and opportunities. It combines qualitative and quantitative research
methods, from analysis of the theory and history of crowdfunding to fieldwork-based
case studies and geographic analysis of the field.

387. Code4Rights
NEW LISTING

Joy Buolamwini

Code4Rights promotes human rights through technology education. By facilitating the
development of rights-focused mobile applications in workshops and an online course,
Code4Rights enables participants to create meaningful technology for their
communities in partnership with local organizations. For example, Code4Rights, in
collaboration with It Happens Here, a grassroots organization focused on addressing
sexual violence, created the First Response Oxford App to address sexual violence at
Oxford University. Over 30 young women contributed to the creation of the app, which
provides survivors of sexual violence and friends of survivors with information about
optional ways to respond, essential knowledge about support resources, critical contact
details, and answers to frequently asked questions.

388. Codesign Toolkit

Sasha Costanza-Chock and Becky Hurwitz

Involving communities in the design process results in products that are more
responsive to a community's needs, more suited to accessibility and usability concerns,
and easier to adopt. Civic media tools, platforms, and research work best when
practitioners involve target communities at all stages of the process: iterative ideation,
prototyping, testing, and evaluation. In the codesign process, communities act as
codesigners and participants, rather than mere consumers, end-users, test subjects, or
objects of study. In the Codesign Studio, students practice these methods in a service
learning project-based studio, focusing on collaborative design of civic media with local
partners. The Toolkit will enable more designers and researchers to utilize the
codesign process in their work by presenting current theory and practices in a
comprehensive, accessible manner.

Alumni Contributor: Molly Sauter

389. Data Therapy

Ethan Zuckerman and Rahul Bhargava


As part of our larger effort to build out a suite of tools for community organizers, we are
helping to build their capacity to do their own creative data visualization and
presentation. New computer-based tools are lowering the barriers of entry for making
engaging and creative presentations of data. Rather than encouraging partnerships
with epidemiologists, statisticians, or programmers, we see an opportunity to build
capacity within small community organizations by using these new tools. This work
involves workshops, webinars, and writing about how organizers can choose more
creative ways to present their data stories.

390. DataBasic
NEW LISTING


Ethan Zuckerman, Rahul Bhargava and Catherine D'Ignazio


DataBasic is a suite of web-based tools that give people fun and relevant ways to learn
how to work with data. Existing tools focus on operating on data quickly to create some
output, rather than focusing on helping learners understand how to work with data. This
fails the huge population of data literacy learners, who are trying to build their capacity
in various ways. Our tools focus on the user as learner. They provide introductory
activities, connect to people with fun sample datasets, and connect to other tools and
techniques for working with data. We strongly believe in building tools focused on
learners, and are putting those ideas into practice on these tools and activities.


391. DeepStream
NEW LISTING

Ethan Zuckerman, Gordon Mangum and Joe Goldbeck


From Ukraine to the US, citizens and journalists are increasingly choosing to livestream
civic events. But livestreams are currently hard to find and lack in-depth information
about the events being documented. DeepStream seeks to expand participation in this
emergent form of media by creating a platform for livestream curation. By searching
across streaming platforms and adding relevant blog posts, news stories, images,
tweets, and other media to livestreaming video, individuals or newsrooms can create
immersive and interactive viewing experiences. Our goal is to design a platform that
lets people re-imagine the livestream viewing experience so that it connects viewers to
global events in a way that emphasizes local perspectives and deeper engagement,
while maintaining the experience of immediacy and authenticity that is an essential part
of livestreaming.

392. Digital Humanitarian Marketplace

Matthew Stempeck

The Internet has disrupted the aid sector like so many other industries before it. In
times of crisis, donors are increasingly connecting directly with affected populations to
provide participatory aid. The Digital Humanitarian Marketplace aggregates these digital
volunteering projects, organizing them by crisis and skills required to help coordinate
this promising new space.

393. Erase the Border

Catherine D'Ignazio

Erase the Border is a web campaign and voice petition platform. It tells the story of the
Tohono O'odham people, whose community has been divided along 75 miles of the
US-Mexico border by a fence. The border fence divides the community, prevents tribe
members from receiving critical health services, and subjects O'odham to racism and
discrimination. This platform is a pilot that we are using to research the potential of
voice and media petitions for civic discourse.

394. First Upload
NEW LISTING

Ethan Zuckerman, Matthew Carroll, Cynthia Fang and Joe Goldbeck

First Upload is a tool for verifying the authenticity of news imagery. It helps find the first
upload of imagery, particularly videos. Finding the person who uploaded a video is a
key to determining authenticity, because often it is necessary to contact that person
directly. It is being developed with input from YouTube and Bloomberg. Currently we
have a working prototype, built for the YouTube site.

395. FOLD

Alexis Hope, Kevin Hu, Joe Goldbeck, Nathalie Huynh, Matthew Carroll, Cesar A.
Hidalgo, Ethan Zuckerman

FOLD is an authoring and publishing platform for creating modular, multimedia stories.
Some readers require greater context to understand complex stories. Using FOLD,
authors can search for and add "context cards" to their stories. Context cards can
contain videos, maps, tweets, music, interactive visualizations, and more. FOLD also
allows authors to link stories together by remixing context cards created by other
writers.

396. Framework for Consent Policies

Willow Brugh
This checklist is designed to help projects that include an element of data collection to
develop appropriate consent policies and practices. The checklist can be especially
useful for projects that use digital or mobile tools to collect, store, or publish data, yet
understand the importance of seeking the informed consent of individuals involved (the
data subjects). This checklist does not address the additional considerations necessary
when obtaining the consent of groups or communities, nor how to approach consent in
situations where there is no connection to the data subject. This checklist is intended
for use by project coordinators, and can ground conversations with management and
project staff in order to identify risks and mitigation strategies during project design or
implementation. It should ideally be used with the input of data subjects.


397. Global Brands

Anurag Gupta, Erhardt Graeff, Huan Sun, Yu Wang and Ethan Zuckerman
Every country has a brand, negative or positive, and that brand is mediated in part by
its global press coverage. We are measuring and ranking the perceptions of the 20
most populous countries by crowdsourcing those perceptions through a "World News
Quiz." Quiz-takers match geographically vague news stories to the countries they think
they occurred in, revealing how they positively or negatively perceive them. By
illustrating the way these biases manifest among English and Chinese speakers, we
hope to help news consumers and producers be more aware of the incomplete
portrayals they have internalized and propagated.

398. Mapping the Globe

Catherine D'Ignazio, Ethan Zuckerman and Ali Hashmi


Mapping the Globe is an interactive tool and map that helps us understand where the
Boston Globe directs its attention. Media attention matters in quantity and quality. It
helps determine what we talk about as a public and how we talk about it. Mapping the
Globe tracks where the paper's attention goes and what that attention looks like across
different regional geographies in combination with diverse data sets like population and
income. Produced in partnership with the Boston Globe.

399. Media Cloud

Hal Roberts, Ethan Zuckerman and David LaRochelle


Media Cloud is a platform for studying media ecosystems--the relationships between
professional and citizen media, between online and offline sources. By tracking millions
of stories published online or broadcast via television, the system allows researchers to
track the spread of memes, media framings, and the tone of coverage of different
stories. The platform is open source and open data, designed to be a substrate for a
wide range of communications research efforts. Media Cloud is a collaboration between
Civic Media and the Berkman Center for Internet and Society at Harvard Law School.

400. Media Cloud Brazil

Ethan Zuckerman, Alexandre Gonçalves, Ronaldo Lemos, Carlos Affonso Pereira
de Souza, Hal Roberts, David Larochelle, Renato Souza, and Flávio Coelho
Media Cloud is a system that facilitates massive content analysis of news on the Web.
Developed by the Berkman Center for Internet and Society at Harvard University,
Media Cloud already analyzes content in English and Russian. During the last months,
we have been working on support for Portuguese content. We intend to analyze the
online debate on the most controversial and politically hot topics of the Brazilian Civil
Rights Framework for the Internet, namely network neutrality and copyright reform. At
the same time, we are writing a step-by-step guide to Media Cloud localization. In the
near future, we will be able to compare different media ecosystems around the world.

401. Media Meter

Ethan Zuckerman, J. Nathan Matias, Matt Stempeck, Rahul Bhargava and Dan
Schultz
What have you seen in the news this week? And what did you miss? Are you getting
the blend of local, international, political, and sports stories you desire? We're building a
media-tracking platform to empower you, the individual, and news providers
themselves, to see what you're getting and what you're missing in your daily
consumption and production of media. The first round of modules developed for the
platform allow you to compare the breakdown of news topics and byline gender across
multiple news sources.
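The topic-breakdown comparison described above reduces to a simple aggregation: count each source's articles per topic, then normalize. A sketch over invented (source, topic) pairs rather than the platform's real article data:

```python
from collections import Counter

# Tally articles per source and topic, then convert counts to the
# fraction of each source's coverage devoted to each topic.

def topic_breakdown(articles):
    """Map each source to its per-topic coverage fractions."""
    per_source = {}
    for source, topic in articles:
        per_source.setdefault(source, Counter())[topic] += 1
    return {source: {topic: n / sum(tally.values())
                     for topic, n in tally.items()}
            for source, tally in per_source.items()}

articles = [("Tribune", "politics"), ("Tribune", "politics"),
            ("Tribune", "sports"), ("Herald", "local")]
breakdown = topic_breakdown(articles)
# Tribune: politics 2/3, sports 1/3; Herald: local 1.0
```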

402. Media Meter Focus

Muhammad Ali Hashmi


Media Meter Focus shows focus mapping of global media attention. What was covered
in the news this week? Did the issues you care about get the attention you think they
deserved? Did the media talk about these topics in the way you want them to? The
toolset also shows news topics mapped against country locations.
Alumni Contributor: Catherine D'Ignazio


403. Media Perspective

Edward L. Platt
Media Perspective brings a data visualization into 3D space. This data sculpture
represents mainstream media coverage of Net Neutrality over 15 months, during the
debate over the FCC's classification of broadband services. Each transparent pane
shows a slice in time, allowing users to physically move and look through the timeline.
The topics cutting through the panes show how attention shifted between aspects of the
debate over time.

404. Mixed-Mode Systems in Disaster Response

Ethan Zuckerman and Willow Brugh

Pure networks and pure hierarchies both have distinct strengths and weaknesses.
These become glaringly apparent during disaster response. By combining these
modes, their strengths (predictability, accountability, appropriateness, adaptability) can
be optimized, and their weaknesses (fragility, inadequate resources) can be
compensated for. Bridging these two worlds is not merely a technical challenge, but
also a social issue.

405. NetStories

Ethan Zuckerman, Heather Craig, Adrienne Debigare and Dalia Othman

Recent years have witnessed a surge in online digital storytelling tools, enabling users
to more easily create engaging multimedia narratives. Increasing Internet access and
powerful in-browser functionality have laid the foundation for the proliferation of new
online storytelling technologies, ranging from tools for creating interactive online videos
to tools for data visualization. While these tools may contribute to diversification of
online storytelling capacity, sifting through tools and understanding their respective
limitations and affordances poses a challenge to storytellers. The NetStories research
initiative explores emergent online storytelling tools and strategies through a
combination of analyzing tools, facilitating story-hack days, and creating an online
database of storytelling tools.

406. NewsPix

Catherine D'Ignazio, Ethan Zuckerman, Matthew Carroll, Emerson College
Engagement Lab and Jay Vachon
NewsPix is a simple news-engagement application that helps users encounter breaking
news in the form of high-impact photos. It is currently a Chrome browser extension
(mobile app to come) that is customizable for small and large news organizations.
Currently, when users open a new, blank page in Chrome, they get a new tab with tiles
that show recently visited pages. NewsPix replaces that view with a high-quality picture
from a news site. Users interested in more information about the photo can click
through to the news site. News organizations can upload photos ranging from breaking
news to historic sporting events, with photos changing every time a new tab is clicked.

407. NGO2.0

Jing Wang, Wang Yu


NGO2.0 is a project that grew out of the work of MIT's New Media Action Lab. The goal of
NGO2.0 is to strengthen the digital and social media literacy of Chinese grassroots
NGOs. Since 2009, the project has established collaborative relationships with IT
corporations, universities, and city-based software developers' communities to
advocate the development of a new brand of public interest sector that utilizes new
media and nonprofit technology to build a better society. NGO2.0 addresses three
major need categories of grassroots NGOs: communication, resources, and
technology. Within each category, NGO2.0 developed and implemented online and
offline projects. These include: Web 2.0 training workshops, Web 2.0 toolbox, a
crowdsourced philanthropy map, news stories and videos for NGOs, crowdfunding
project design, NGO-CSR Partnership Forum, database of Chinese NGOs, and online
survey of Chinese NGOs' Internet usage.


408. Open Gender Tracker

Irene Ros, Adam Hyland, J. Nathan Matias and Ethan Zuckerman


Open Gender Tracker is a suite of open source tools and APIs that make it easy for
newsrooms and media monitors to collect metrics and gain a better understanding of
gender diversity in their publications and audiences. This project has been created in
partnership with Irene Ros of Bocoup, with funding from the Knight Foundation.

409. Open Water Project

Adrienne Debigare, Ethan Zuckerman, Heather Craig, Catherine D'Ignazio, Don
Blair and Public Lab Community
The Open Water Project aims to develop and curate a set of low-cost, open source
tools enabling communities everywhere to collect, interpret, and share their water
quality data. Traditional water monitoring uses expensive, proprietary technology,
severely limiting the scope and accessibility of water quality data. Homeowners
interested in testing well water, watershed managers concerned about fish migration
and health, and other groups could benefit from an open source, inexpensive,
accessible approach to water quality monitoring. We're developing low-cost, open
source hardware devices that will measure some of the most common water quality
parameters, using designs that make it possible for anyone to build, modify, and
deploy water quality sensors in their own neighborhood.

410. Out for Change: Transformative Media Organizing Project

Sasha Costanza-Chock, Becky Hurwitz, Heather Craig, Royal Morris, with support
from Rahul Bhargava, Ed Platt, Yu Wang

The Out for Change Transformative Media Organizing Project (OCTOP) links LGBTQ,
Two-Spirit, and allied media makers, online organizers, and tech-activists across the
United States. In 2013-2014, we conducted a strengths/needs assessment of the
media and organizing capacity of the movement and offered a series of workshops and
skillshares around transmedia organizing. The project is guided by a core group of
project partners and advisers who work with LGBTQ and Two-Spirit folks. The project
is supported by faculty and staff at the MIT Center for Civic Media, Research Action
Design, and by the Ford Foundation's Advancing LGBT Rights Initiative.

411. PageOneX

Ethan Zuckerman, Edward Platt, Rahul Bhargava and Pablo Rey Mazon

Newspaper front pages are a key source of data about our media ecology. Newsrooms
spend massive time and effort deciding what stories make it to the front page.
PageOneX makes coding and visualizing newspaper front page content much easier,
democratizing access to newspaper attention data. Communication researchers have
analyzed newspaper front pages for decades, using slow, laborious methods.
PageOneX simplifies, digitizes, and distributes the process across the net and makes it
available for researchers, citizens, and activists.
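Front-page coding of this kind reduces each page to rectangles labeled by story or topic, so attention becomes each topic's share of the page area. A sketch with invented coordinates, not PageOneX's actual data model:

```python
# Each coded region is (x, y, width, height) in percent of the page,
# so the page itself is 100 x 100 units of area.

def attention_shares(coded_areas, page_area=100 * 100):
    """Fraction of front-page area devoted to each coded topic."""
    return {topic: (w * h) / page_area
            for topic, (x, y, w, h) in coded_areas.items()}

front_page = {
    "election": (0, 0, 100, 40),  # banner story across the top
    "economy": (0, 40, 50, 30),   # left column below it
}
shares = attention_shares(front_page)
# shares["election"] -> 0.4, shares["economy"] -> 0.15
```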

412. Promise Tracker

Ethan Zuckerman, Rahul Bhargava, Joy Buolamwini and Emilie Reiser


After an election, how can citizens hold leaders accountable for promises made during
the campaign season? Promise Tracker is a mobile phone-based data collection
system that enables communities to collect information on issues they consider
priorities and monitor the performance of their local governments. Through an
easy-to-use web platform, citizens can visualize aggregated data and use it to advocate
for change with local government, institutions, the press, and fellow community
members. We are currently piloting the project in Brazil and the United States, and
prototyping sensor integration to allow citizens to collect a range of environmental data.
Alumni Contributors: Alexis Hope and Jude Mwenda Ntabathia


413. Readersourcing

Ethan Zuckerman, Rahul Bhargava, Nathan Matias and Sophie Diehl


The Passing On project uses data from 20 years of New York Times stories about
society's heroes, leaders, and visionaries to "Reader-source" improvements to
Wikipedia. As readers explore compelling stories about notable women, new content is
generated and the public is inspired to contribute those stories to Wikipedia.

414. Scanner Grabber

Tal Achituv
Scanner Grabber is a digital police scanner that enables reporters to record, playback,
and export audio, as well as archive public safety radio (scanner) conversations. Like a
TiVo for scanners, it's an update on technology that has been stuck in the last century.
It's a great tool for newsrooms. For instance, a problem for reporters is missing the
beginning of an important police incident because they have stepped away from their
desk at the wrong time. Scanner Grabber solves this because conversations can be
played back. Also, snippets of exciting audio, for instance a police chase, can be
exported and embedded online. Reporters can listen to files while writing stories, or
listen to older conversations to get a more nuanced grasp of police practices or
long-term trouble spots. Editors and reporters can also use the tool for internal
collaboration or public crowdsourcing.
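
The "never miss the beginning" behavior described above can be sketched as a rolling buffer that always retains the last stretch of scanner audio, so a conversation's start can be recovered after the fact. The class name, chunk format, and window size below are illustrative assumptions, not Scanner Grabber's actual implementation:

```python
from collections import deque

class RollingRecorder:
    """Keep only the most recent audio chunks, so playback can
    reach back to before the moment a reporter hits 'save'."""

    def __init__(self, max_chunks=600):
        # e.g. 600 one-second chunks = a ten-minute rolling window
        self.buffer = deque(maxlen=max_chunks)

    def feed(self, chunk):
        # called continuously as scanner audio arrives;
        # the oldest chunk is discarded once the window is full
        self.buffer.append(chunk)

    def grab(self):
        # export everything currently in the window, oldest first
        return list(self.buffer)

rec = RollingRecorder(max_chunks=3)
for chunk in [b"a", b"b", b"c", b"d"]:
    rec.feed(chunk)
print(rec.grab())  # oldest chunk b"a" has rolled off: [b'b', b'c', b'd']
```

The key design choice is the bounded `deque`: audio is captured unconditionally, and "recording" is just a matter of copying out whatever the window still holds.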

415. Student Legal Services for Innovation

Ethan Zuckerman and J. Nathan Matias


Should students be prosecuted for innovative projects? In December 2014, four
undergraduates associated with the Media Lab were subpoenaed by the New Jersey
Attorney General after winning a programming competition with a bitcoin-related proof
of concept. We worked with MIT administration and the Electronic Frontier Foundation
to support the students and establish legal support for informal innovation. In
September 2015, MIT announced the creation of a new clinic for business and
cyberlaw.

416. Terra Incognita: 1000 Cities of the World

Catherine D'Ignazio, Ethan Zuckerman and Rahul Bhargava


Terra Incognita is a global news game and recommendation system. Terra Incognita
helps you discover interesting news and personal connections to cities that you haven't
read about. Whereas many recommendation systems connect you on the basis of
"similarity," Terra Incognita connects you to information on the basis of "serendipity."
Each time you open the application, Terra Incognita shows you a city that you have not
yet read about and gives you options for reading about it. Chelyabinsk (Russia),
Hiroshima (Japan), Hagåtña (Guam), and Dhaka (Bangladesh) are a few of the places
where you might end up.

417. The Babbling Brook

Catherine D'Ignazio and Ethan Zuckerman


The Babbling Brook is an unnamed neighborhood creek in Waltham, MA, that winds its
way to the Charles River. With the help of networked sensors and real-time processing,
the brook constantly tweets about the status of its water quality, including thoughts and
bad jokes about its own environmental and ontological condition. Currently, the
Babbling Brook senses temperature and depth and cross-references that information
with real-time weather data to come up with extremely bad comedy. Thanks to Brian
Mayton, the Responsive Environments group, and Tidmarsh Farms Living Observatory
for their support.
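
The brook's cross-referencing of sensor readings with weather data can be sketched as a simple rule table that maps conditions to messages. The thresholds, forecast strings, and jokes below are invented for illustration; they are not the project's actual code:

```python
def brook_status(temp_c, depth_cm, forecast):
    """Combine live sensor readings (temperature, depth) with a
    weather forecast string to pick a deliberately corny status."""
    if forecast == "rain" and depth_cm > 50:
        # rising water plus incoming rain triggers a flood joke
        return "Feeling swell today. Literally. #risingwaters"
    if temp_c < 4:
        # near-freezing water triggers a winter joke
        return "Brrr-ook. I'm mostly ice and regret."
    # default: report the raw readings
    return f"Just babbling along at {temp_c:.1f}°C, {depth_cm} cm deep."

print(brook_status(2.0, 30, "clear"))  # Brrr-ook. I'm mostly ice and regret.
```

In the real installation the resulting string would be posted to Twitter on a schedule; here the rule evaluation is the only part sketched.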

418. The Effect of Gratitude Online

J. Nathan Matias, Erhardt Graeff and Emily Harburg


What is the effect of receiving appreciation or public recognition for your work? In this
cross-language, cross-cultural study, we test whether receiving formal appreciation on
Wikipedia affects the quality of a contributor's subsequent edits. Our initial analysis of
over 600,000 messages of appreciation on the English Wikipedia identifies groups of
contributors for whom appreciation has a significant effect.


419. VoIP Drupal

Leo Burd
VoIP Drupal is an innovative framework that brings the power of voice and
Internet-telephony to Drupal sites. It can be used to build hybrid applications that
combine regular touchtone phones, web, SMS, Twitter, IM, and other communication
tools in a variety of ways, facilitating community outreach and providing an online
presence to those who are illiterate or do not have regular access to computers. VoIP
Drupal will change the way you interact with Drupal, your phone, and the web.

420. Vojo.co

Alex Goncalves, Denise Cheng, Ethan Zuckerman, Rahul Bhargava, Sasha
Costanza-Chock, Rebecca Hurwitz, Edward Platt, Rodrigo Davies and Rogelio Lopez
Vojo.co is a hosted mobile blogging platform that makes it easy for people to share
content to the web from mobile phones via voice calls, SMS, or MMS. Our goal is to
make it easier for people in low-income communities to participate in the digital public
sphere. You don't need a smart phone or an app to post blog entries or digital stories to
Vojo; any phone will do. You don't even need Internet access: Vojo lets you create an
account via SMS and start posting right away. Vojo is powered by the VozMob Drupal
Distribution, a customized version of the popular free and open-source content
management system that is being developed through an ongoing codesign process by
day laborers, household workers, and a diverse team from the Institute of Popular
Education of Southern California (IDEPSCA).
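
The account-via-SMS flow described above might look something like the following sketch: the first message from an unknown number creates an account, and every later message becomes a post. The function, reply strings, and in-memory store are hypothetical, not the VozMob Drupal code:

```python
accounts = {}  # phone number -> list of posts (stand-in for a real database)

def handle_sms(sender, text):
    """Handle one inbound SMS: create an account on first contact,
    otherwise publish the message body as a new post."""
    if sender not in accounts:
        accounts[sender] = []
        return "Welcome to Vojo! Text again to publish your first post."
    accounts[sender].append(text)
    return f"Posted! You now have {len(accounts[sender])} post(s)."

print(handle_sms("+15551234", "JOIN"))
print(handle_sms("+15551234", "My first story from the neighborhood"))
```

A production system would sit behind an SMS gateway's inbound webhook and persist posts to the CMS; the point here is only that no app or Internet access is needed on the sender's side.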

421. What We Watch

Ethan Zuckerman, Rahul Bhargava and Edward Platt


More than a billion people a month visit YouTube to watch videos. Sometimes, those
billion people watch the same video. What We Watch is a browser for trending
YouTube videos. Some videos trend in a single country, and some find regional
audiences. Others spread across borders of language, culture, and nation to reach a
global audience. What We Watch lets us visualize and explore the connections
between countries based on their video viewing habits.
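
One simple way to derive country-to-country connections from trending lists is to weight each pair of countries by the number of trending videos they share. This overlap measure is an illustrative assumption, not necessarily the metric What We Watch uses:

```python
from itertools import combinations

def shared_trending(trending_by_country):
    """Build weighted edges between countries: the weight is the
    number of trending video IDs the two countries have in common."""
    edges = {}
    for a, b in combinations(sorted(trending_by_country), 2):
        overlap = len(set(trending_by_country[a]) & set(trending_by_country[b]))
        if overlap:  # only keep pairs that actually share a video
            edges[(a, b)] = overlap
    return edges

trends = {
    "BR": ["v1", "v2", "v3"],
    "PT": ["v2", "v3", "v4"],
    "JP": ["v5"],
}
print(shared_trending(trends))  # {('BR', 'PT'): 2}
```

The resulting edge weights can then be drawn as a graph, making cross-border audiences (like the Brazil/Portugal pair above) visible at a glance.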

422. Whose Voices? Twitter Citation in the Media

Ethan Zuckerman, Nathan Matias and Diyang Tang


Mainstream media increasingly quote social media sources for breaking news. "Whose
Voices" tracks who's getting quoted across topics, showing just how citizen media
sources are influencing international news reporting.

423. ZL Vortice

Daniel Paz de Araujo (UNICAMP), Ethan Zuckerman, Adeline Gabriela Silva Gil,
Hermes Renato Hildebrand (PUC-SP/UNICAMP) and Nelson Brissac Peixoto
(PUC-SP)


This project is currently conducting a survey of data from the East Side (Zona Leste) of
the city of São Paulo, Brazil. The aim is to map landscape dynamics:
infrastructure and urban planning, critical landscapes, housing, productive territory,
recycling, and public space. The material will be made available on a digital platform,
accessible by computers and mobile devices: a tool specially developed to enable local
communities to disseminate productive and creative practices that occur in the area, as
well as to enable greater participation in the formulation of public policies. ZL Vortice
is an instrument that will serve to strengthen the social, productive, and cultural
networks of the region.
