Media Lab
Projects | Fall 2015
Many of the MIT Media Lab research projects described in the following pages are conducted under the auspices of
sponsor-supported, interdisciplinary Media Lab centers, joint research programs, and initiatives. They are:
Autism & Communication Technology Initiative
The Autism & Communication Technology Initiative utilizes the unique features of the Media Lab to foster the development of
innovative technologies that can enhance and accelerate the pace of autism research and therapy. Researchers are especially
invested in creating technologies that promote communication and independent living by enabling non-autistic people to
understand the ways autistic people are trying to communicate; improving autistic people's ability to use receptive and
expressive language along with other means of functional, non-verbal expression; and providing telemetric support that reduces
reliance on caregivers' physical proximity, yet still enables enriching and natural connectivity as wanted and needed.
Center for Civic Media
Communities need information to make decisions and take action: to provide aid to neighbors in need, to purchase an
environmentally sustainable product and shun a wasteful one, to choose leaders on local and global scales. Communities are
also rich repositories of information and knowledge, and often develop their own innovative tools and practices for information
sharing. Existing systems to inform communities are changing rapidly, and new ecosystems are emerging where old distinctions
like writer/audience and journalist/amateur have collapsed. The Civic Media group is a partnership between the MIT Media Lab
and Comparative Media Studies at MIT. Together, we work to understand these new ecosystems and to build tools and systems
that help communities collect and share information and connect that information to action. We work closely with communities to
understand their needs and strengths, and to develop useful tools together using collaborative design principles. We particularly
focus on tools that can help amplify the voices of communities often excluded from the digital public sphere and connect them
with new audiences, as well as on systems that help us understand media ecologies, augment civic participation, and foster
digital inclusion.
Center for Extreme Bionics
Half of the world's population currently suffers from some form of physical or neurological disability. At some point in our lives, it
is all too likely that a family member or friend will be struck by a limiting or incapacitating condition, from dementia, to the loss of
a limb, to a debilitating disease such as Parkinson's. Today we acknowledge and even "accept" serious physical and mental
impairments as inherent to the human condition. But must these conditions be accepted as "normal"? What if, instead, through
the invention and deployment of novel technologies, we could control biological processes within the body in order to repair or even eradicate these conditions? What if there were no such thing as human disability? These questions drive the work of Media Lab faculty members Hugh Herr and Ed Boyden, and MIT Institute Professor Robert Langer, and have led them and the MIT
Media Lab to propose the establishment of a new Center for Extreme Bionics. This dynamic new interdisciplinary organization
will draw on the existing strengths of research in synthetic neurobiology, biomechatronics, and biomaterials, combined with
enhanced capabilities for design development and prototyping.
Center for Mobile Learning
The Center for Mobile Learning invents and studies new mobile technologies to promote learning anywhere anytime for anyone.
The Center focuses on mobile tools that empower learners to think creatively, collaborate broadly, and develop applications that
are useful to themselves and others around them. The Center's work covers location-aware learning applications, mobile
sensing and data collection, augmented reality gaming, and other educational uses of mobile technologies. The Center's first
major activity will focus on App Inventor, a programming system that makes it easy for learners to create mobile apps by fitting
together puzzle piece-shaped blocks in a web browser.
Center for Terrestrial Sensing
The deeply symbiotic relationship between our planet and ourselves is increasingly mediated by technology. Ubiquitous,
networked sensing has provided the earth with an increasingly sophisticated electronic nervous system. How we connect with,
interpret, visualize, and use the geoscience information shared and gathered is a deep challenge, with transformational
potential. The Center for Terrestrial Sensing aims to address this challenge.
The most current information about our research is available on the MIT Media Lab Web site, at
http://www.media.mit.edu/research/.
The Lab has also organized the following special interest groups (SIGs), which deal with particular subject areas.
Advancing Wellbeing
In contributing to the digital revolution, the Media Lab helped fuel a society where increasing numbers of people are obese,
sedentary, and glued to screens. Our online culture has promoted meaningfulness in terms of online fame and numbers of
viewers, and converted time previously spent building face-to-face relationships into interactions online with people who may not
be who they say they are. What we have helped to create, willingly or not, often diminishes the social-emotional relationships
and activities that promote physical, mental, and social health. Moreover, our workplace culture escalates stress, provides
unlimited caffeine, distributes nutrition-free food, holds back-to-back sedentary meetings, and encourages overnight hackathons
and unhealthy sleep behavior. Without being dystopian about technology, this effort aims to spawn a series of projects that
leverage the many talents and strengths in the Media Lab in order to reshape technology and our workplace to enhance health
and wellbeing.
CE 2.0
Most of us are awash in consumer electronics (CE) devices: from cellphones, to TVs, to dishwashers. They provide us with
information, entertainment, and communications, and assist us in accomplishing our daily tasks. Unfortunately, most are not as
helpful as they could and should be; for the most part, they are dumb, unaware of us or our situations, and often difficult to use.
In addition, most CE devices cannot communicate with our other devices, even when such communication and collaboration
would be of great help. The Consumer Electronics 2.0 initiative (CE 2.0) is a collaboration between the Media Lab and its
sponsor companies to formulate the principles for a new generation of consumer electronics that are highly connected,
seamlessly interoperable, situation-aware, and radically simpler to use. Our goal is to show that as computing and
communication capability seep into more of our everyday devices, these devices do not have to become more confusing and
complex, but rather can become more intelligent in a cooperative and user-friendly way.
City Science
The world is experiencing a period of extreme urbanization. In China alone, 300 million rural inhabitants will move to urban
areas over the next 15 years. This will require building an infrastructure equivalent to the one housing the entire population of
the United States in a matter of a few decades. In the future, cities will account for nearly 90 percent of global population growth,
80 percent of wealth creation, and 60 percent of total energy consumption. Developing better strategies for the creation of new cities is, therefore, a global imperative. Our need to improve our understanding of cities, however, is pressed not only by the
social relevance of urban environments, but also by the availability of new strategies for city-scale interventions that are enabled
by emerging technologies. Leveraging advances in data analysis, sensor technologies, and urban experiments, City Science will
provide new insights into creating a data-driven approach to urban design and planning. To build the cities that the world needs,
we need a scientific understanding of cities that considers our built environments and the people who inhabit them. Our future
cities will desperately need such understanding.
Connection Science
As more of our personal and public lives become infused and shaped by data from sensors and computing devices, the lines
between the digital and the physical have become increasingly blurred. New possibilities arise, some promising, others alarming,
but both with an inexorable momentum that is supplanting time-honored practices and institutions. MIT Connection Science is a
cross-disciplinary effort drawing on the strengths of faculty, departments and researchers across the Institute, to decode the
meaning of this dynamic, at times chaotic, new environment. The initiative will help business executives, investors,
entrepreneurs and policymakers capitalize on the multitude of opportunities unlocked by the new hyperconnected world we live
in.
Ethics
The Ethics Initiative works to foster multidisciplinary program designs and critical conversations around ethics, wellbeing, and
human flourishing. The initiative seeks to create collaborative platforms for scientists, engineers, artists, and policy makers to
optimize designing for humanity....
Future of News
The Future of News is designing, testing, and making creative tools that help newsrooms adapt in a time of rapid change. As
traditional news models erode, we need new models and techniques to reach a world hungry for news, but whose reading and
viewing habits are increasingly splintered. Newsrooms need to create new storytelling techniques, recognizing that the way
users consume news continues to change. Readers and viewers expect personalized content, deeper context, and information
that enables them to influence and change their world. At the same time, newsrooms are seeking new ways to extend their
influence, to amplify their message by navigating new paths for readers and viewers, and to find new methods of delivery. To
tackle these problems, we will work with Media Lab students and the broader MIT community to identify promising projects and
find newsrooms across the country interested in beta-testing those projects.
Future Storytelling
The Future Storytelling working group at the Media Lab is rethinking storytelling for the 21st century. The group takes a new and
dynamic approach to how we tell our stories, creating new methods, technologies, and learning programs that recognize and
respond to the changing communications landscape. The group builds on the Media Lab's more than 25 years of experience in
developing society-changing technologies for human expression and interactivity. By applying leading-edge technologies to
make stories more interactive, improvisational, and social, researchers are working to transform audiences into active
participants in the storytelling process, bridging the real and virtual worlds, and allowing everyone to make and share their own
unique stories. Research also explores ways to revolutionize imaging and display technologies, including developing
next-generation cameras and programmable studios, making movie production more versatile and economical.
Media Lab Learning
The Media Lab Learning initiative explores new approaches to learning. We study learning across many dimensions, ranging
from neurons to nations, from early childhood to lifelong scholarship, and from human creativity to machine intelligence. The
program is built around a cohort of learning innovators from across the diverse Media Lab groups. We are designing tools and
technologies that change how, when, where, and what we learn; and developing new solutions to enable and enhance learning
everywhere, including at the Media Lab itself. In addition to creating tools and models, the initiative provides non-profit and
for-profit mechanisms to help promising innovations to scale.
Open Agriculture (OpenAG)
The MIT Media Lab Open Agriculture (OpenAG) initiative is on a mission to create healthier, more engaging, and more inventive
future food systems. We believe the precursor to a healthier and more sustainable food system will be the creation of an
open-source ecosystem of food technologies that enable and promote transparency, networked experimentation, education, and
hyper-local production. The OpenAG Initiative brings together partners from industry, government, and academia to develop an
open source "food tech" research collective for the creation of the global agricultural hardware, software, and data commons.
Together we will build collaborative tools and open technology platforms for the exploration of future food systems.
The Pixel Factory: Data Visualization SIG
The rise of computational methods has generated a new natural resource: data. The Pixel Factory focuses on the creation of
data visualization resources and tools in collaboration with corporate members. The goals of The Pixel Factory are twofold. First,
we will create software resources that will facilitate the development of online data visualization platforms. More importantly, we
will create these resources as a means to learn, as the most valuable outcome of the Pixel Factory will not be the software resources produced, as incredible as these could be, but the generation of people it will imbue with the capacity to create these resources.
Ultimate Media
Visual media has irretrievably lost its lock on the audience but has gained unprecedented opportunity to evolve the platform by
which it is communicated and to become integrated with the social and data worlds in which we live. Ultimate Media is creating a
platform for the invention, creation, and realization of new ways to explore and participate in the media universe. We apply
extremes of access, processing, and interaction to build new media experiences and explorations that permit instant video
blogging, exploration of the universe of news and narrative entertainment, and physical interfaces that allow people to
collaborate around media.

1. 3D Telepresence Chair
Daniel Novy

2. 4K Comics

3. Aerial Volumetric Light-Field Display
V. Michael Bove, Daniel Novy and Henry Holtzman (Samsung NExD Lab)

4. Ambi-Blinds

5. BigBarChart

6. Bottles&Boxes: Packaging with Sensors

7. Calliope

8. Consumer Holo-Video
V. Michael Bove Jr., Bianca Datta, Ermal Dreshaj and Sundeep Jolly
The goal of this project, building upon work begun by Stephen Benton and the Spatial
Imaging group, is to create an inexpensive desktop monitor for a PC or game console
that displays holographic video images in real time, suitable for entertainment,
engineering, or medical imaging. To date, we have demonstrated the fast rendering of
holo-video images (including stereographic images, which, unlike ordinary stereograms,
have focusing consistent with depth information) from OpenGL databases on
off-the-shelf PC graphics cards; current research addresses new optoelectronic
architectures to reduce the size and manufacturing cost of the display system.
Alumni Contributors: James D. Barabas, Daniel Smalley and Quinn Y J Smithwick

9. Dressed in Data

10. DUSK

11. EmotiveModeler: An Emotive Form Design CAD Tool

12. Everything Tells a Story
Following upon work begun in the Graspables project, we are exploring what happens
when a wide range of everyday consumer products can sense, interpret into human
terms (using pattern recognition methods), and retain memories, such that users can
construct a narrative with the aid of the recollections of the "diaries" of their sporting
equipment, luggage, furniture, toys, and other items with which they interact.

13. Guided-Wave Light Modulator

14. Holoshop

15. Infinity-by-Nine

16. ListenTree: Audio-Haptic Display in the Natural Environment

17. Live Objects
A Live Object is a small device that can stream media content wirelessly to nearby
mobile devices without an Internet connection. Live Objects are associated with real
objects in the environment, such as an art piece in a museum, a statue in a public
space, or a product in a store. Users exploring a space can discover nearby Live
Objects and view content associated with them, as well as leave comments for future
visitors. The mobile device retains a record of the media viewed (and links to additional
content), while the objects can retain a record of who viewed them. Future extensions
will look into making the system more social, exploring game applications such as
media scavenger hunts built on top of the platform, and incorporating other types of
media such as live and historical data from sensors associated with the objects.

18. Narratarium

19. Networked Playscapes: Dig Deep
Networked Playscapes re-imagine outdoor play by merging the flexibility and fantastical qualities of the digital world with the tangible, sensorial properties of physical play to create
hybrid interactions for the urban environment. Dig Deep takes the classic sandbox
found in children's playgrounds and merges it with the common fantasy of "digging your
way to the other side of the world" to create a networked interaction in tune with child
cosmogony.

20. Pillow-Talk
Pillow-Talk is the first of a series of objects designed to aid creative endeavors through
the unobtrusive acquisition of unconscious, self-generated content to permit reflexive
self-knowledge. Composed of a seamless recording device embedded in a pillow, and
a playback and visualization system in a jar, Pillow-Talk crystallizes that which we
normally forget. This allows users to capture their dreams in a less mediated way,
aiding recollection by priming the experience and providing no distraction for recall and
capture through embodied interaction.

21. Programmable Synthetic Hallucinations

22. ShAir
ShAir is a platform for instantly and easily creating local content-shareable spaces
without requiring an Internet connection or location information. ShAir-enabled devices
can opportunistically communicate with other mobile devices and optional pervasive
storage devices such as WiFi SD cards whenever they enter radio range of one
another. Digital content can hop through devices in the background without user
intervention. Applications that can be built on top of the platform include ad-hoc
photo/video/music sharing and distribution, opportunistic social networking and games,
digital business card exchange during meetings and conferences, and local news
article-sharing on trains and buses.
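The core mechanic described above, content hopping between devices whenever they happen to meet, can be pictured with a small sketch. The Device class and sync_pair function below are illustrative stand-ins, not the ShAir API; they only show how opportunistic pairwise syncing lets content spread without any server.

```python
# Minimal sketch of opportunistic, device-to-device content hopping in the
# spirit of ShAir (illustrative only; not the actual ShAir platform API).

class Device:
    def __init__(self, name):
        self.name = name
        self.store = {}          # content_id -> payload carried by this device

    def publish(self, content_id, payload):
        self.store[content_id] = payload

def sync_pair(a, b):
    """When two devices enter radio range, exchange any items the other lacks."""
    for cid, payload in list(a.store.items()):
        b.store.setdefault(cid, payload)
    for cid, payload in list(b.store.items()):
        a.store.setdefault(cid, payload)

# A photo hops from a phone to a passer-by's device, then on to a third device
# that never met the original publisher.
alice, bob, carol = Device("alice"), Device("bob"), Device("carol")
alice.publish("photo:42", b"...jpeg bytes...")
sync_pair(alice, bob)    # alice and bob pass each other
sync_pair(bob, carol)    # later, bob and carol pass each other
print("photo:42" in carol.store)   # True: content hopped without a server
```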

24. Smell Narratives

25. SurroundVision

26. Thermal Fishing Bob
V. Michael Bove, Laura Perovich, Don Blair and Sara Wiley (Northeastern University)
Two of the most important traits of environmental hazards today are their invisibility and
the fact that they are experienced by communities, not just individuals. Yet we don't
have a good way to make hazards like chemical pollution visible and intuitive. The
thermal fishing bob seeks to visceralize rather than simply visualize data by creating a
data experience that makes water pollution data present. The bob measures water
temperature and displays that data by changing color in real time. Data is also logged
to be physically displayed elsewhere and can be further recorded using long-exposure
photos. Making environmental data experiential and interactive will help both
communities and researchers better understand pollution and its implications.
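As a rough illustration of the bob's real-time display idea, the snippet below maps a temperature reading to a blue-to-red color for an LED; the temperature range and color ramp are assumptions for illustration, not the project's actual calibration.

```python
# Illustrative temperature-to-color mapping of the kind a device like the
# thermal fishing bob could use (thresholds and colors are assumed here).

def temp_to_rgb(temp_c, cold_c=10.0, hot_c=30.0):
    """Map a water temperature to a blue (cold) to red (warm) RGB triple."""
    t = max(cold_c, min(hot_c, temp_c))          # clamp the reading
    frac = (t - cold_c) / (hot_c - cold_c)       # normalize into [0, 1]
    red = int(255 * frac)
    blue = int(255 * (1.0 - frac))
    return (red, 0, blue)

# Example readings; each would also be timestamped and logged for later
# physical display (e.g., long-exposure photos of the glowing bob).
for reading in [12.5, 18.0, 27.3]:
    print(reading, "->", temp_to_rgb(reading))
```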

27. Optogenetics and Synthetic Biology Tools

28. Prototype Strategies for Treating Brain Disorders
Karen Buch, Stephanie Ku, Changyang Linghu, Giovanni Talei Franzesi, Christian Wentz, Nir Grossman, Harbaljit Sohal, Bara Badwan
New technologies for recording neural activity, controlling neural activity, or building
brain circuits, may be capable some day of serving in therapeutic roles for improving
the health of human patients: enabling the restoration of lost senses, the control of
aberrant or pathological neural dynamics, and the augmentation of neural circuit
computation, through prosthetic means. High-throughput molecular and physiological analysis methods may also open up new diagnostic possibilities. We are assessing, often in collaboration with other groups, the translational possibilities opened up by our technologies, exploring their safety and efficacy in multiple animal models in order to discover potential applications of our tools to various clinically
relevant scenarios. New kinds of "brain co-processor" may be possible which can work
efficaciously with the brain to augment its computational abilities, e.g., in the context of
cognitive, emotional, sensory, or motor disability.

29.
Daniel Oran, Pablo Valdes, Alex Wissner-Gross, Fei Chen, Linyi Gao, Sam Rodriques, Paul Tillberg, Asmamaw "Oz" Wassie, Shahar Alon, Jae-Byum Chang, Ru Wang, Yongxin Zhao, Manos Karagiannis, Adam Marblestone, Alexander Clifton, Jeremy Wohlwend, Andrew Payn
Brain circuits are large, 3D structures. However, the building blocks (proteins, signaling complexes, synapses) are organized with nanoscale precision. This presents a
fundamental tension in neuroscience: to understand a neural circuit, you might need to
map a large diversity of nanoscale building blocks, across an extended spatial expanse.
We are developing a new suite of tools that enable the mapping of the location and
identity of the molecular building blocks of the brain, so that comprehensive taxonomies
of cells, circuits, and computations might someday become possible, even in entire
brains. One of the technologies we are developing enables large, 3D objects to be
imaged with nanoscale precision by physically expanding the sample, a tool we call
expansion microscopy (ExM). We are working to improve expansion microscopy
further, and are working, often in interdisciplinary collaborations, on a suite of new
labeling and analysis techniques to enable multiplexed readout.

30.

31. Understanding Normal and Pathological Brain Computations
Brian Allen, Ho-Jun Suk, Jay Yu, Limor Freifeld, Erica (Eunjung) Jung, Annabelle
Singer, Demian Park, Ingrid Van Welie, Bettina Arkhurst, Eunice Wu
We are providing our tools to the community, and also using them within our lab, to
analyze how specific brain mechanisms (molecular, cellular, circuit-level) give rise to
behaviors and pathological states. These studies may yield fundamental insights into
how best to go about treating brain disorders.

32. AIDA: Affective Intelligent Driving Agent
Drivers spend a significant amount of time multi-tasking while they are behind the
wheel. These dangerous behaviors, particularly texting while driving, can lead to
distractions and ultimately to accidents. Many in-car interfaces designed to address this
issue still neither take a proactive role to assist the driver nor leverage aspects of the
driver's daily life to make the driving experience more seamless. In collaboration with
Volkswagen/Audi and the SENSEable City Lab, we are developing AIDA (Affective
Intelligent Driving Agent), a robotic driver-vehicle interface that acts as a sociable
partner. AIDA elicits facial expressions and strong non-verbal cues for engaging social
interaction with the driver. AIDA also leverages the driver's mobile device as its face,
which promotes safety, offers proactive driver support, and fosters deeper
personalization to the driver.

33. Animal-Robot Interaction
Like people, dogs and cats live among technologies that affect their lives. Yet little of
this technology has been designed with pets in mind. We are developing systems that
interact intelligently with animals to entertain, exercise, and empower them. Currently,
we are developing a laser-chasing game, in which dogs or cats are tracked by a
ceiling-mounted webcam, and a computer-controlled laser moves with knowledge of the
pet's position and movement. Machine learning will be applied to optimize the specific
laser strategy. We envision enabling owners to initiate and view the interaction remotely
through a web interface, providing stimulation and exercise to pets when the owners
are at work or otherwise cannot be present.
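The laser-chasing loop described above can be sketched as a per-frame update that keeps the dot slightly ahead of the tracked pet. The tracker stub, lead distance, and gain below are invented placeholders rather than the project's implementation.

```python
# Rough sketch of the control idea behind the laser-chasing game: read the
# pet's tracked position each frame and keep the laser dot a small "lead"
# ahead of it. The tracking and pan/tilt details are placeholders.

import random

def track_pet():
    """Stand-in for the ceiling-camera tracker; returns (x, y) in frame pixels."""
    return (random.uniform(0, 640), random.uniform(0, 480))

def step_laser(laser_xy, pet_xy, lead=60.0, gain=0.2):
    """Move the laser part of the way toward a point slightly ahead of the pet."""
    lx, ly = laser_xy
    px, py = pet_xy
    target = (px + lead, py)          # keep the dot just out of reach
    return (lx + gain * (target[0] - lx), ly + gain * (target[1] - ly))

laser = (320.0, 240.0)
for _ in range(5):                    # one iteration per camera frame
    pet = track_pet()
    laser = step_laser(laser, pet)
    print("pet", [round(v) for v in pet], "laser", [round(v) for v in laser])
```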

34. Cloud-HRI
Imagine opening your eyes and being awake for only half an hour at a time. This is the life that robots traditionally live, due to a number of factors such as battery life and wear on prototype joints. Roboticists have typically muddled through this challenge
by crafting handmade perception and planning models of the world, or by using
machine learning with synthetic and real-world data, but cloud-based robotics aims to
marry large distributed systems with machine learning techniques to understand how to
build robots that interpret the world in a richer way. This movement aims to build
large-scale machine learning algorithms that use experiences from large groups of
people, whether sourced from a large number of tabletop robots or a large number of
experiences with virtual agents. Large-scale robotics aims to change embodied AI as it
changed non-embodied AI.

35. DragonBot: Android Phone Robots for Long-Term HRI

36. Global Literacy Tablets
Cynthia Breazeal, David Nunez, Tinsley Galyean, Maryanne Wolf (Tufts), and Robin Morris (GSU)
We are developing a system of early literacy apps, games, toys, and robots that will
triage how children are learning, diagnose literacy deficits, and deploy dosages of
content to encourage app play using a mentoring algorithm that recommends an
appropriate activity given a child's progress. Currently, over 200 Android-based tablets
have been sent to children around the world; these devices are instrumented to provide
a very detailed picture of how kids are using these technologies. We are using this big
data to discover usage and learning models that will inform future educational
development.
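A minimal sketch of the kind of progress-based recommendation the mentoring algorithm performs is shown below; the skill names, mastery scores, and activities are hypothetical stand-ins, not the deployed system.

```python
# Hedged sketch of a progress-based activity recommender: pick the weakest
# skill and suggest an activity that exercises it (all names are illustrative).

def recommend_activity(progress):
    """Return an app activity targeting the child's lowest-mastery skill."""
    activities = {
        "letter_recognition": "letter-tracing game",
        "phonemic_awareness": "sound-matching story",
        "vocabulary": "picture-word flashcards",
    }
    weakest = min(progress, key=progress.get)   # lowest mastery score wins
    return activities[weakest]

child_progress = {"letter_recognition": 0.8,
                  "phonemic_awareness": 0.4,
                  "vocabulary": 0.6}
print(recommend_activity(child_progress))       # -> "sound-matching story"
```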

37. Huggable: A Social Robot for Pediatric Care

38. LightSwarm
Charles Rose, Cynthia Breazeal, Palash Nandy, Hiram Moncivias and Kyubok Lee
LightSwarm is a platform for interaction between humans and swarm robots. Swarm
robots are implemented as augmented-reality agents that communicate and interact
through movements. While each robot is simple, the aggregate shows complex
behavior. In this way, LightSwarm invites one to think of the mind as a loose consensus
of a society of agents, which allows for more nuanced interactions.

39. Mind-Theoretic Planning for Robots
Mind-Theoretic Planning (MTP) is a technique for robots to plan in social domains. This
system takes into account probability distributions over the initial beliefs and goals of
people in the environment that are relevant to the task, and creates a prediction of how
they will rationally act on their beliefs to achieve their goals. The MTP system then
proceeds to create an action plan for the robot that simultaneously takes advantage of
the effects of anticipated actions of others and also avoids interfering with them.
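The following toy sketch illustrates the mind-theoretic idea on a grid world: maintain a distribution over a person's possible goals, predict their rational next step, and choose a robot step that trades progress against expected interference. The grid, goals, and cost terms are invented for illustration and are not the MTP system itself.

```python
# Toy sketch of mind-theoretic planning: belief over goals -> predicted person
# motion -> robot step that avoids the predicted cells (illustrative only).

def predict_person_step(person_pos, goal_belief):
    """Probability of each cell being the person's next position, assuming
    they step greedily toward whichever goal they actually hold."""
    def greedy_step(pos, goal):
        dx = (goal[0] > pos[0]) - (goal[0] < pos[0])
        dy = (goal[1] > pos[1]) - (goal[1] < pos[1])
        return (pos[0] + dx, pos[1] + dy)
    cell_probs = {}
    for goal, p in goal_belief.items():
        cell = greedy_step(person_pos, goal)
        cell_probs[cell] = cell_probs.get(cell, 0.0) + p
    return cell_probs

def choose_robot_step(robot_pos, robot_goal, cell_probs, collision_penalty=10.0):
    """Pick the neighboring cell balancing progress against expected interference."""
    candidates = [(robot_pos[0] + dx, robot_pos[1] + dy)
                  for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]]
    def cost(cell):
        distance = abs(cell[0] - robot_goal[0]) + abs(cell[1] - robot_goal[1])
        return distance + collision_penalty * cell_probs.get(cell, 0.0)
    return min(candidates, key=cost)

belief = {(5, 0): 0.7, (0, 5): 0.3}           # two hypothesized goals for the person
probs = predict_person_step((2, 0), belief)
print(choose_robot_step((2, 1), (5, 1), probs))
```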

40.
To serve us well, robots and other agents must understand our needs and how to fulfill
them. To that end, our research develops robots that empower humans by interactively
learning from them. Interactive learning methods enable technically unskilled end-users
to designate correct behavior and communicate their task knowledge to improve a
robot's task performance. This research on interactive learning focuses on algorithms
that facilitate teaching by signals of approval and disapproval from a live human trainer.
We operationalize these feedback signals as numeric rewards within the
machine-learning framework of reinforcement learning. In comparison to the
complementary form of teaching by demonstration, this feedback-based teaching may
require less task expertise and place less cognitive load on the trainer. Envisioned
applications include human-robot collaboration and assistive robotic devices for handicapped users, such as myoelectrically controlled prosthetics.
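A compact sketch of this feedback-based teaching loop appears below: scripted approval and disapproval signals stand in for a live trainer, and each signal incrementally updates the agent's action-value estimates. The task, actions, and trainer model are assumptions for illustration, not the group's actual learning framework.

```python
# Sketch of learning from human approval (+1) / disapproval (-1) treated as
# numeric reward within a simple value-estimation loop (all toy assumptions).

import random

actions = ["wave", "fetch", "spin"]
value = {a: 0.0 for a in actions}      # learned estimate of human approval
alpha = 0.3                            # learning rate
epsilon = 0.2                          # exploration rate

def human_feedback(action):
    """Stand-in for a live trainer who likes 'fetch' and dislikes 'spin'."""
    return {"wave": 0.0, "fetch": 1.0, "spin": -1.0}[action]

for step in range(50):
    if random.random() < epsilon:
        a = random.choice(actions)                 # explore
    else:
        a = max(actions, key=lambda x: value[x])   # exploit current estimates
    r = human_feedback(a)                          # approval/disapproval signal
    value[a] += alpha * (r - value[a])             # incremental value update

print(value)    # 'fetch' should end up with the highest estimated value
```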

41. Robotic Language Learning Companions
Cynthia Breazeal, Jacqueline Kory Westlund, Sooyeon Jeong, Paul Harris, Dave DeSteno, and Leah Dickens
Young children learn language not through listening alone, but through active
communication with a social actor. Cultural immersion and context are also key in
long-term language development. We are developing robotic conversational partners
and hybrid physical/digital environments for language learning. For example, the robot
Sophie helped young children learn French through a food-sharing game. The game
was situated on a digital tablet embedded in a café table. Sophie modeled how to order
food and as the child practiced the new vocabulary, the food was delivered via digital
assets onto the table's surface. A teacher or parent can observe and shape the
interaction remotely via a digital tablet interface to adjust the robot's conversation and
behavior to support the learner. More recently, we have been examining how social
nonverbal behaviors impact children's perceptions of the robot as an informant and
social companion.
Alumni Contributors: Natalie Anne Freed and Adam Michael Setapen

42. Robotic Learning Companions

43. SHARE: Understanding and Manipulating Attention Using Social Robots

44. Socially Assistive Robotics: An NSF Expedition in Computing
Our mission is to develop the computational techniques that will enable the design,
implementation, and evaluation of "relational" robots, in order to encourage social,
emotional, and cognitive growth in children, including those with social or cognitive
deficits. Funding for the project comes from the NSF Expeditions in Computing
program. This expedition has the potential to substantially impact the effectiveness of
education and healthcare, and to enhance the lives of children and other groups that
require specialized support and intervention. In particular, the MIT effort is focusing on
developing second-language learning companions for pre-school aged children,
ultimately for ESL (English as a Second Language).
Alumni Contributors: Catherine Havasi and Brad Knox

45. Tega
Cooper Perkins Inc., Fardad Faridi, Cynthia Breazeal, Jin Joo Lee, Luke Plummer, IFRobots and Stacey Dyer
Tega is a new robot platform for long-term interactions with children. The robot
leverages smart phones to graphically display facial expressions. Smart phones are
also used for computational needs, including behavioral control, sensor processing, and
motor control to drive its five degrees of freedom. To withstand long-term continual use,
we have designed an efficient battery-powered system that can potentially run for up to
six hours before needing to be charged. We also designed for more robust and reliable
actuator movements so that the robot can express consistent and expressive behaviors
over long periods of time. Through its small size and furry exterior, the robot is
aesthetically designed for children. We aim to field test the robot's ability to work
reliably in out-of-lab environments and engage young children in educational activities.
Alumni Contributor: Kris Dos Santos

46. TinkRBook: Reinventing the Reading Primer

47. Artificial Gastrocnemius

48. Biomimetic Active Prosthesis for Above-Knee Amputees

49. Control of Muscle-Actuated Systems via Electrical Stimulation
Hugh Herr
Motivated by applications in rehabilitation and robotics, we are developing
methodologies to control muscle-actuated systems via electrical stimulation. As a
demonstration of such potential, we are developing centimeter-scale robotic systems
that utilize muscle for actuation and glucose as a primary source of fuel. This is an
interesting control problem because muscles: a) are mechanical state-dependent
actuators; b) exhibit strong nonlinearities; and c) have slow time-varying properties due
to fatigue-recuperation, growth-atrophy, and damage-healing cycles. We are
investigating a variety of adaptive and robust control techniques to enable us to achieve
trajectory tracking, as well as mechanical power-output control under sustained
oscillatory conditions. To implement and test our algorithms, we developed an
experimental capability that allows us to characterize and control muscle in real time,
while imposing a wide variety of dynamical boundary conditions.
Alumni Contributor: Waleed A. Farahat
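As a much-simplified illustration of closed-loop stimulation control for trajectory tracking, the sketch below drives a toy muscle model (a saturating response scaled down by slow fatigue) with a proportional-integral controller; the model and gains are invented and are not the experimental setup described above.

```python
# Simplified sketch of feedback control of a stimulated muscle tracking a
# desired force trajectory; the muscle model and PI gains are assumptions.

import math

def muscle_force(stimulation, fatigue):
    """Toy muscle: saturating response to stimulation, scaled down by fatigue."""
    return (1.0 - fatigue) * math.tanh(stimulation)

dt, ki, kp = 0.01, 4.0, 1.5
fatigue, integral, stim = 0.0, 0.0, 0.0

for k in range(500):
    t = k * dt
    desired = 0.4 + 0.2 * math.sin(2 * math.pi * t)   # desired force trajectory
    actual = muscle_force(stim, fatigue)
    error = desired - actual
    integral += error * dt
    stim = max(0.0, kp * error + ki * integral)       # PI control of stimulation
    fatigue = min(0.5, fatigue + 0.0005 * stim * dt)  # slow time-varying fatigue
    if k % 100 == 0:
        print(f"t={t:.2f}s desired={desired:.2f} actual={actual:.2f}")
```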

50. Dancing Control System for Bionic Ankle Prosthesis
Hugh Herr, Bevin Lin, Elliott Rouse, Nathan Villagaray-Carski and Robert Emerson
Professional ballroom dancer Adrianne Haslet-Davis lost her natural ability to dance
when her left leg was amputated below the knee following the Boston Marathon
bombings in April 2013. Hugh Herr was introduced to Adrianne while meeting with
bombing survivors at Boston's Spaulding Rehabilitation Hospital. For Professor Herr,
this meeting generated a research challenge: build Adrianne a bionic ankle prosthesis,
and restore her ability to dance. The research team for this project spent some 200
days studying the biomechanics of dancing and designing the bionic technology based
on their investigations. The control system for Adrianne was implemented on a
customized BiOM bionic ankle prosthesis.

51. Effect of a Powered Ankle on Shock Absorption and Interfacial Pressure

52. FitSocket: Measurement for Attaching Objects to People

53. FlexSEA: Flexible, Scalable Electronics Architecture for Wearable Robotics Applications
This project aims to enable fast prototyping of a multi-axis and multi-joint active
prosthesis by developing a new modular electronics system. This system provides the
required hardware and software to do precise motion control, data acquisition, and
networking. Scalability is obtained by the use of a fast industrial communication
protocol between the modules, and by a standardization of the peripherals' interfaces: it
is possible to add functionalities to the system simply by plugging additional cards.
Hardware and software encapsulation is used to provide high-performance, real-time
control of the actuators while keeping the high-level algorithmic development and
prototyping simple, fast, and easy.
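The plug-in modularity described above can be pictured as every board exposing one standardized read/command interface on a shared bus, as in the sketch below; the class and method names are assumptions for illustration, not the FlexSEA firmware API.

```python
# Illustrative sketch of a standardized peripheral interface: adding a function
# to the system amounts to registering another module on the bus.

class Module:
    """Common interface every board on the communication bus implements."""
    def read_sensors(self):
        raise NotImplementedError
    def command(self, setpoint):
        pass   # boards without actuators simply ignore commands

class AnkleMotorDriver(Module):
    def __init__(self):
        self.angle = 0.0
    def read_sensors(self):
        return {"ankle_angle_deg": self.angle}
    def command(self, setpoint):
        self.angle += 0.1 * (setpoint - self.angle)   # crude motion toward setpoint

class EMGBoard(Module):
    def read_sensors(self):
        return {"emg_rms": 0.12}    # placeholder reading

bus = {"ankle": AnkleMotorDriver(), "emg": EMGBoard()}   # plug in more cards here

def control_cycle(bus, ankle_setpoint):
    readings = {name: m.read_sensors() for name, m in bus.items()}
    bus["ankle"].command(ankle_setpoint)
    return readings

print(control_cycle(bus, ankle_setpoint=15.0))
```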

54. Human Walking Model Predicts Joint Mechanics, Electromyography, and Mechanical Economy
We are studying the mechanical behavior of leg muscles and tendons during human
walking in order to motivate the design of power-efficient robotic legs. The Endo-Herr
walking model uses only three actuators (leg muscles) to power locomotion. It uses
springs and clutches in place of other essential tendons and muscles to store energy
and transfer energy from one joint to another during walking. Since mechanical clutches
require much less energy than electric motors, this model can be used to design highly
efficient robotic legs and exoskeletons. Current work includes analysis of the model at
variable walking speeds and informing design specifications for a collaborative
"SuperFlex" exosuit project.
Alumni Contributor: Ken Endo

55. Load-Bearing Exoskeleton for Augmentation of Human Running

56. Neural Interface Technology for Advanced Prosthetic Limbs
Hugh Herr
Recent advances in artificial limbs have resulted in the provision of powered ankle and
knee function for lower extremity amputees and powered elbow, wrist, and finger joints
for upper extremity prostheses. Researchers still struggle, however, with how to provide
prosthesis users with full volitional and simultaneous control of the powered joints. This
project seeks to develop means to allow amputees to control their powered prostheses
by activating the peripheral nerves present in their residual limb. Such neural control
can be more natural than currently used myoelectric control, since the same functions
previously served by particular motor fascicles can be directed to the corresponding
prosthesis actuators for simultaneous joint control, as in normal limbs. Future plans
include the capability to electrically activate the sensory components of residual limb
nerves to provide amputees with tactile feedback and an awareness of joint position
from their prostheses.

57. Powered Ankle-Foot Prosthesis
The human ankle provides a significant amount of net positive work during the stance
period of walking, especially at moderate to fast walking speeds. Conversely,
conventional ankle-foot prostheses are completely passive during stance, and
consequently, cannot provide net positive work. Clinical studies indicate that transtibial
amputees using conventional prostheses experience many problems during
locomotion, including a high gait metabolism, a low gait speed, and gait asymmetry.
Researchers believe the main cause of these locomotion problems is the inability
of conventional prostheses to provide net positive work during stance. The objective of
this project is to develop a powered ankle-foot prosthesis that is capable of providing
net positive work during the stance period of walking. To this end, we are investigating
the mechanical design and control system architectures for the prosthesis. We are also
conducting a clinical evaluation of the proposed prosthesis on different amputee
participants.
Alumni Contributor: Samuel Au

58. Sensor-Fusions for an EMG Controlled Robotic Prosthesis
Current unmotorized prostheses do not provide adequate energy return during late
stance to improve level-ground locomotion. Robotic prostheses can provide power
during late-stance to improve metabolic economy in an amputee during level-ground
walking. This project seeks to improve the types of terrain a robotic ankle can
successfully navigate by using command signals taken from the intact and residual
limbs of an amputee. By combining these command signals with sensors attached to
the robotic ankle, it might be possible to further understand the role of physiological
signals in the terrain adaptation of robotic ankles.
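One way to picture the sensor fusion described above is a weighted blend of an EMG-derived intent estimate from the residual limb with onboard slope sensing, used to select a terrain mode, as in the hedged sketch below; the features, weights, thresholds, and modes are invented for illustration.

```python
# Hedged sketch of EMG + onboard-sensor fusion for terrain adaptation
# (features, weights, and modes are illustrative assumptions).

def fuse_terrain_estimate(emg_dorsiflex_level, shank_pitch_deg, w_emg=0.6):
    """Blend normalized EMG intent with an IMU slope estimate into one score."""
    imu_score = max(-1.0, min(1.0, shank_pitch_deg / 15.0))   # -1 downhill .. +1 uphill
    emg_score = max(-1.0, min(1.0, 2.0 * emg_dorsiflex_level - 1.0))
    return w_emg * emg_score + (1.0 - w_emg) * imu_score

def select_ankle_mode(score):
    if score > 0.3:
        return "stair/ramp ascent: increase dorsiflexion"
    if score < -0.3:
        return "descent: increase damping"
    return "level ground: normal powered push-off"

print(select_ankle_mode(fuse_terrain_estimate(emg_dorsiflex_level=0.9,
                                              shank_pitch_deg=8.0)))
```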

59. Tethered Robotic System for Understanding Human Movements
The goal of this project is to build a powerful system as a scientific tool for bridging the
gap in the literature by determining the dynamic biomechanics of the lower-limb joints
and metabolic effects of physical interventions during natural locomotion. This system is
meant for use in applying forces to the human body and measuring the force,
displacement, and other physiological properties simultaneously, helping investigate
controllability and efficacy of mechanical devices physically interacting with a human
subject.

60. Variable-Impedance Prosthetic (VIPr) Socket Design
Today, 100 percent of amputees experience some form of prosthetic socket discomfort.
This project involves the design and production of a comfortable, variable impedance
prosthetic (VIPr) socket using digital anatomical data for a transtibial amputee using
computer-aided design and manufacturing (CAD/CAM). The VIPr socket uses multiple
materials to achieve compliance, thereby increasing socket comfort for amputees, while
maintaining structural integrity. The compliant features are seamlessly integrated into
the 3D-printed socket to achieve lower interface peak pressures over bony

61. Volitional Control of a Powered Ankle-Foot Prosthesis
62.
Collective Memory
63.
64.
Data Visualization:
The Pixel Factory
DIVE
The rise of computational methods has generated a new natural resource: data. While
it's unclear if big data will open up trillion-dollar markets, it is clear that making sense of
data isn't easy, and that data visualizations are essential to squeeze meaning out of
data. But the capacity to create data visualizations is not widespread; to help develop it
we introduce the Pixel Factory, a new initiative focusing on the creation of
data-visualization resources and tools in collaboration with corporate members. Our
goals are to create software resources for development of online data-visualization
platforms that work with any type of data; and to create these resources as a means to
learn. The most valuable outcome of this work will not be the software resources
produced, incredible as these could be, but the generation of people with the capacity
to make these resources.
65. FOLD
Alexis Hope, Kevin Hu, Joe Goldbeck, Nathalie Huynh, Matthew Carroll, Cesar A.
Hidalgo, Ethan Zuckerman
FOLD is an authoring and publishing platform for creating modular, multimedia stories.
Some readers require greater context to understand complex stories. Using FOLD,
authors can search for and add "context cards" to their stories. Context cards can
contain videos, maps, tweets, music, interactive visualizations, and more. FOLD also
allows authors to link stories together by remixing context cards created by other
writers.
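As a hedged illustration of the modular structure described above, the following Python sketch models a story as a sequence of narrative blocks that reference reusable context cards; the class and field names are hypothetical and do not reflect FOLD's actual data model.

    # Hypothetical sketch of a FOLD-style story: a linear narrative whose blocks
    # reference reusable "context cards" (videos, maps, tweets, visualizations)
    # that other authors can remix. Field names are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ContextCard:
        card_id: str
        kind: str          # e.g., "video", "map", "tweet", "visualization"
        source_url: str
        author: str        # original creator, preserved when the card is remixed

    @dataclass
    class NarrativeBlock:
        text: str
        card_ids: List[str] = field(default_factory=list)  # cards attached to this block

    @dataclass
    class Story:
        title: str
        author: str
        blocks: List[NarrativeBlock] = field(default_factory=list)

        def remix_card(self, card: ContextCard, block_index: int) -> None:
            """Attach a card created by another writer to one of this story's blocks."""
            self.blocks[block_index].card_ids.append(card.card_id)

    # Usage: attach a map card made by one author to another author's story.
    map_card = ContextCard("c42", "map", "https://example.org/map", author="writer_a")
    story = Story("Flood Recovery", "writer_b", [NarrativeBlock("The river crested Tuesday.")])
    story.remix_card(map_card, block_index=0)
    print(story.blocks[0].card_ids)  # ['c42']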
66. GIFGIF
67. Immersion
68. Opus
69. Pantheon
Ali Almossawi, Andrew Mao, Defne Gurel, Cesar A. Hidalgo, Kevin Zeng Hu,
Deepak Jagdish, Amy Yu, Shahar Ronen and Tiffany Lu
We were not born with the ability to fly, cure disease, or communicate at long distances,
but we were born in a society that endows us with these capacities. These capacities
are the result of information that has been generated by humans and that humans have
been able to embed in tangible and digital objects. This information is all around us: it's
the way in which the atoms in an airplane are arranged or the way in which our
cellphones whisper dance instructions to electromagnetic waves. Pantheon is a project
celebrating the cultural information that endows our species with these fantastic
capacities. To celebrate our global cultural heritage, we are compiling, analyzing, and
visualizing datasets that can help us understand the process of global cultural
development.
70. Place Pulse
71. StreetScore
72.
73.
74.
75.
The Economic Complexity Observatory
With more than six billion people and 15 billion products, the world economy is anything
but simple. The Economic Complexity Observatory is an online tool that helps people
explore this complexity by providing tools that can allow decision makers to understand
the connections that exist between countries and the myriad of products they produce
and/or export. The Economic Complexity Observatory puts at everyone's fingertips the
latest analytical tools developed to visualize and quantify the productive structure of
countries and their evolution.
Most interactions between cultures require overcoming a language barrier, which is why
multilingual speakers play an important role in facilitating such interactions. In addition,
certain languages (not necessarily the most spoken ones) are more likely than others to
serve as intermediary languages. We present the Language Group Network, a new
approach for studying global networks using data generated by tens of millions of
speakers from all over the world: a billion tweets, Wikipedia edits in all languages, and
translations of two million printed books. Our network spans over eighty languages, and
can be used to identify the most connected languages and the potential paths through
which information diffuses from one culture to another. Applications include promotion
of cultural interactions, prediction of trends, and marketing.
We used 15 months of data from 1.5 million people to show that four
points--approximate places and times--are enough to identify 95 percent of individuals
in a mobility database. Our work shows that human behavior puts fundamental natural
constraints on the privacy of individuals, and these constraints hold even when the
resolution of the dataset is low. These results demonstrate that even coarse datasets
provide little anonymity. We further developed a formula to estimate the uniqueness of
human mobility traces. These findings have important implications for the design of
frameworks and institutions dedicated to protecting the privacy of individuals.
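The uniqueness result can be illustrated with a small empirical sketch (this is not the authors' published formula): for each user, sample a few spatio-temporal points from their trace and count how many traces in the database contain all of them. The toy data below is invented for illustration.

    # Minimal sketch of estimating mobility-trace uniqueness empirically: pick k
    # random spatio-temporal points from a user's trace and check how many users
    # in the database match all of them. Toy data only.
    import random

    def unique_fraction(traces, k=4):
        """traces: dict user_id -> set of (cell_id, hour) tuples."""
        unique = 0
        for user, points in traces.items():
            pts = random.sample(sorted(points), min(k, len(points)))
            matches = [u for u, p in traces.items() if all(x in p for x in pts)]
            if len(matches) == 1:
                unique += 1
        return unique / len(traces)

    # Toy database: three users observed as (antenna cell, hour-of-week) points.
    traces = {
        "u1": {("A", 9), ("B", 12), ("C", 18), ("A", 22)},
        "u2": {("A", 9), ("D", 12), ("C", 18), ("E", 22)},
        "u3": {("F", 9), ("B", 12), ("G", 18), ("A", 22)},
    }
    print(unique_fraction(traces, k=4))  # fraction of users uniquely identified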
76. bioLogic
Lining Yao
This research introduces bacterial endospores as microscale biological actuators to
build stimuli-responsive, shape-changing interfaces. We demonstrate the unique
programmability of spore actuators for achieving transformations on human scale
surfaces through precise spore deposition and geometric pattern designs. We describe
the process from wet-lab spore cultivation to machine-shop printing spore solutions, as
well as exemplifying their applications. This research intends to contribute to the
understanding of the control and programming of natural materials. We hope to
encourage interface designers to consider adapting living organisms to build actuation
mechanisms and to help grow the nascent field of applying biological technologies to
HCI.
77. inFORM
78. jamSheets: Interacting with Thin Stiffness-Changing Material
Jifei Ou, Lining Yao, Daniel Tauber, Juergen Steimle, Ryuma Niiyama, Hiroshi Ishii
79. LineFORM
We propose a novel shape-changing interface that consists of a single line. Lines have
several interesting characteristics from the perspective of interaction design:
abstractness of data representation; a variety of inherent interactions/affordances; and
constraints such as boundaries or borderlines. By utilizing such aspects of lines
together with the added transformation capability, we present various applications in
different scenarios such as shape-changing cords, mobiles, body constraints, and data
manipulation to investigate the design space of line-based shape-changing interfaces.
80. MirrorFugue
Viewing MirrorFugue evokes the sense of walking into a memory, where the pianist
plays without awareness of the viewer's presence; or, it is as if viewers were ghosts in
another's dream, able to sit down in place of the performing pianist and play along.
81. MMODM: Massively Multiplayer Online Drum Machine
82. Pneumatic Shape-Changing Interfaces
Jifei Ou, Lining Yao, Ryuma Niiyama, Sean Follmer and Hiroshi Ishii
83. Radical Atoms
Hiroshi Ishii
MMODM is an online drum machine based on the Twitter streaming API, using tweets
from around the world to create and perform musical sequences together in real time.
Users anywhere can express 16-beat note sequences across 26 different instruments,
using plain-text tweets from any device. Meanwhile, users on the site itself can use the
graphical interface to locally DJ the rhythm, filters, and sequence blending. By
harnessing this duo of website and Twitter network, MMODM enables a whole new
scale of synchronous musical collaboration between users locally, remotely, across a
wide variety of computing devices, and across a variety of cultures.
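As a rough illustration of the plain-text encoding idea, the Python sketch below parses an assumed tweet format in which a letter names one of 26 instruments and a 16-character pattern marks hits and rests; the syntax shown is hypothetical, not MMODM's actual tweet grammar.

    # Illustrative sketch only: one possible way a plain-text tweet could encode a
    # 16-beat sequence for one of 26 instruments (a-z), with 'x' = hit, '-' = rest.
    # The "a:x---..." format is an assumption, not MMODM's actual tweet syntax.
    import re

    def parse_tweet(tweet: str):
        """Return {instrument_letter: [bool]*16} for every pattern found in the tweet."""
        sequences = {}
        for instrument, pattern in re.findall(r"\b([a-z]):([x\-]{16})(?![x\-])", tweet.lower()):
            sequences[instrument] = [step == "x" for step in pattern]
        return sequences

    tweet = "jamming on #drums  k:x---x---x---x--- s:----x-------x---"
    print(parse_tweet(tweet))
    # 'k' has hits on steps 1, 5, 9, 13; 's' on steps 5 and 13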
Radical Atoms is our vision of interactions with future materials. Radical Atoms takes a
leap beyond Tangible Bits by assuming a hypothetical generation of materials that can
change form and appearance dynamically, becoming as reconfigurable as pixels on a
screen. Radical Atoms is a computationally transformable and reconfigurable material
that is bidirectionally coupled with an underlying digital model (bits) so that dynamic
changes of physical form can be reflected in digital states in real time, and vice versa.
Alumni Contributors: Keywon Chung, Adam Kumpf, Amanda Parkes, Hayes Raffle and
Jamie B Zigelbaum
84. TRANSFORM
Hiroshi Ishii, Sean Follmer, Daniel Leithinger, Philipp Schoessler, Amit Zoran and
LEXUS International
TRANSFORM fuses technology and design to celebrate its transformation from still
furniture to a dynamic machine driven by a stream of data and energy. TRANSFORM
aims to inspire viewers with unexpected transformations and the aesthetics of the
complex machine in motion. First exhibited at LEXUS DESIGN AMAZING MILAN (April
2014), the work comprises three dynamic shape displays that move over one thousand
pins up and down in real time to transform the tabletop into a dynamic tangible display.
The kinetic energy of the viewers, captured by a sensor, drives the wave motion
represented by the dynamic pins. The motion design is inspired by dynamic interactions
among wind, water, and sand in nature, Escher's representations of perpetual motion,
and the attributes of sand castles built at the seashore. TRANSFORM tells of the
conflict between nature and machine, and its reconciliation, through the ever-changing
tabletop landscape.
85. TRANSFORM: Adaptive and Dynamic Furniture
Luke Vink, Viirj Kan, Ken Nakagaki, Daniel Leithinger, Sean Follmer, Philipp
Schoessler, Amit Zoran, Hiroshi Ishii
Introducing TRANSFORM, a shape-changing desk. TRANSFORM is an exploration of
how shape display technology can be integrated into our everyday lives as interactive,
transforming furniture. These interfaces not only serve as traditional computing devices,
but also support a variety of physical activities. By creating shapes on demand or by
moving objects around, TRANSFORM changes the ergonomics and aesthetic
dimensions of furniture, supporting a variety of use cases at home and work: it holds
and moves objects like fruit, game tokens, office supplies, and tablets, creates dividers
on demand, and generates interactive sculptures to convey messages and audio.
86. Context-Aware Biology
87. Context-Aware Pipette
88. GeneFab
Bram Sterling, Kelly Chang, Joseph M. Jacobson, Peter Carr, Brian Chow, David
Sun Kong, Michael Oh and Sam Hwang
Current biological research workflows make use of disparate, poorly integrated systems
that impose a large mental burden on the scientist, leading to mistakes, often on long,
complex, and costly experimental procedures. The lack of open tools to assist in the
collection of distributed experimental conditions and data is largely responsible for
making protocols difficult to debug, and laboratory practice hard to learn. In this work,
we describe an open Protocol Descriptor Language (PDL) and system to enable a
context-rich, quantitative approach to biological research. We detail the development of
a closed-loop pipetting technology and a wireless sample-temperature sensor that
integrate with our Protocol Description platform, enabling novel, real-time experimental
feedback to the researcher, thereby reducing mistakes and increasing overall scientific
reproducibility.
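The following sketch conveys the flavor of a machine-readable protocol whose steps can be checked against live instrument readings; the step schema and tolerances are hypothetical and are not the group's published PDL syntax.

    # Hedged sketch of the protocol-description idea: a protocol as a list of steps
    # with expected conditions that connected instruments (context-aware pipette,
    # wireless temperature sensor) can verify in real time. Schema is hypothetical.
    protocol = [
        {"step": "add_reagent", "volume_ul": 50, "reagent": "buffer_A", "tolerance_ul": 1.0},
        {"step": "incubate", "target_temp_c": 37.0, "minutes": 30, "tolerance_c": 0.5},
    ]

    def check_step(step, measurement):
        """Compare a live instrument measurement against the step's expected condition."""
        if step["step"] == "add_reagent":
            return abs(measurement - step["volume_ul"]) <= step["tolerance_ul"]
        if step["step"] == "incubate":
            return abs(measurement - step["target_temp_c"]) <= step["tolerance_c"]
        return False

    # e.g., the wireless sample-temperature sensor reports 36.2 C during incubation:
    print(check_step(protocol[1], 36.2))  # False -> flag a deviation to the researcher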
Pipettes are the equivalent in biology of the keyboard for computer science: a key tool
that enables interface with the subject matter. In the case of the pipette, it enables the
scientist to move precise amounts of liquids. Pipette design hasn't changed in over 30
years. We've designed a new type of pipette that allows wireless, context-aware
operation.
What would you like to "build with biology"? The goal of the GeneFab project is to
develop technology for the rapid fabrication of large DNA molecules, with composition
specified directly by the user. Our intent is to facilitate the field of synthetic biology as it
moves from a focus on single genes to designing complete biochemical pathways,
genetic networks, and more complex systems. Sub-projects include: DNA error
correction, microfluidics for high throughput gene synthesis, and genome-scale
engineering (rE. coli).
Alumni Contributor: Chris Emig
89. NanoFab
90. Scaling Up DNA Logic and Structures
91. Synthetic Photosynthesis
Our goals include novel gene logic and data logging systems, as well as DNA scaffolds
that can be produced on commercial scales. State of the art in the former is limited by
finding analogous and orthogonal proteins for those used in current single-layer gates
and two-layered circuits. State of the art in the latter is constrained in size and efficiency
by kinetic limits on self-assembly. We have designed and plan to demonstrate
cascaded logic on chromosomes and DNA scaffolds that exhibit exponential growth.
We are using nanowires to build structures for synthetic photosynthesis for the solar
generation of liquid fuels.
92. Microculture
NEW LISTING
93. Storyboards
Josh Sarantitis, Sepandar Kamvar, Yonatan Cohen, Kathryn Grantham and Lisa
DePiano
Microculture gardens are a network of small-scale permaculture gardens that are aimed
at reimagining our urban food systems, remediating our air supply, and making our
streets more amenable to human-scale mobility. Microculture combines
micro-gardening with the principles of permaculture, creatively occupying viable space
throughout our communities for small-scale self-sustaining food forests. Micro-gardens
have proven to be successful for the production of a broad range of species, including
leafy vegetables, fruit, root vegetables, herbs, and more. Traditionally, container-based
micro-gardens occupy approximately one meter of space or less and are made from
found, up-cycled materials. Our innovations involve the combining of permaculture and
micro-gardening principles, developing materials and designs that allow for modularity,
mobility, easy replicability, placement in parking spots, and software that supports the
placement, creation, and maintenance of these gardens.
Sepandar Kamvar, Kevin Slavin, Jonathan Bobrow and Shantell Martin
Giving opaque technology a glass house, Storyboards present the tinkerers or owners
of electronic devices with stories of how their devices work. Just as the circuit board is a
story of star-crossed lovers Anode and Cathode with its cast of characters (resistor,
capacitor, transistor), Storyboards have their own characters driving a parallel visual
narrative.
94. The Dog Programming Language
95. Wildflower Montessori
Sep Kamvar, Kim Smith, Yonatan Cohen, Kim Holleman, Nazmus Saquib,
Caroline Jaffe
Dog is a new programming language that makes it easy and intuitive to create social
applications. A key feature of Dog is built-in support for interacting with people. Dog
provides a natural framework in which both people and computers can be sent requests
and return results. It can perform a long-running computation while also displaying
messages, requesting information, or sending operations to particular individuals or
groups. By switching between machine and human computation, developers can create
powerful workflows and model complex social processes without worrying about
low-level technical details.
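The sketch below imitates, in ordinary Python rather than Dog syntax, the central idea of routing one step of a workflow to a machine and the next to a person; the function names and the simulated reply are illustrative only.

    # Python analogy for Dog's key idea (not Dog syntax): a workflow step can be
    # dispatched either to a compute function or to a person, and the workflow
    # resumes when the person responds. The human reply is simulated here.
    import queue

    human_inbox = queue.Queue()   # stands in for messaging real people

    def ask_person(person, question):
        """Post a request to a person and return their reply (simulated here)."""
        human_inbox.put((person, question))
        # A real system would suspend the long-running computation until a reply arrives.
        return f"{person}'s answer to '{question}'"

    def compute(data):
        return sorted(data)

    def workflow(photos, reviewer):
        ranked = compute(photos)                                           # machine computation
        verdict = ask_person(reviewer, f"Approve top photo {ranked[0]}?")  # human computation
        return ranked, verdict

    print(workflow(["b.jpg", "a.jpg"], reviewer="alice"))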
96.
97. ARkits: Architectural Robotics Kits
Kent Larson, Luis Alberto Alonso Pastor, Ivan Fernandez, Hasier Larrea and
Carlos Rubio
In an urbanized world, where space is too valuable to be static and unresponsive,
ARkits provide a robotic kit of parts to empower real estate developers, furniture
manufacturers, architects, and "space makers" in general, to create a new generation
of transformable and intelligent spaces.
98. BoxLab
99. CityFARM
Camille Richman, Elaine Kung, Emma Feshbach, Jordan Rogoff, Mathew Daiter,
Kent Larson, Caleb Harper, Edward Platt, Preethi Vaidyanathan and Sophia Jaffee
By 2030, nine billion people will populate the globe and six out of every 10 will live in
cities. The future of global food production will mandate a paradigm shift to
resource-leveraged and environmentally sound urban food-growing solutions. The
CityFARM project explores building-integrated agriculture and environmentally
optimized growing. We are exploring what it means technologically, environmentally,
and socially to design industrially scalable agricultural systems in the heart of urban
areas. Through innovative research, and through development of hydroponic and
aeroponic systems, diagnostic and networked sensing, building integration, and
reductive energy design, CityFARM methodology reduces water consumption by 90
percent, eliminates chemical pesticides, and reduces embodied energy in produce by a
factor of four. By fundamentally rethinking "grow it THERE and eat it HERE," we can
eliminate environmental contaminants and increase access to nutrient-dense produce
in our future cities.
Kent Larson, Hasier Larrea, Daniel Goodman, Oier Ario, Phillip Ewing
Live large in 200 square feet! An all-in-one disentangled robotic furniture piece makes it
possible to live comfortably in a tiny footprint not only by magically reconfiguring the
space, but also by serving as a platform for technology integration and experience
augmentation. Two hundred square feet has never seemed so large.
101. CityOffice
Kent Larson, Waleed Gowharji, Carson Smuts, J. Ira Winder and Yan Zhang
The "Barcelona" demo is an independent prototype designed to model and simulate
human interactions within a Barcelona-like urban environment. Different types of land
use (residential, office, and amenities) are configured into urban blocks and analyzed
with agent-based techniques.
103. CityScope BostonBRT
Ryan Chin, Allenza Michel, Ariel Noyman, Jeffrey Rosenblum, Anson Stewart,
Phil Tinn, Ira Winder, Chris Zegras
CityScope is working with the Barr Foundation of Boston to develop a
tangible-interactive participatory environment for planning bus rapid transit (BRT).
Ira Winder
Ira Winder
The Dynamic 3D prototype allows users to edit a digital model by moving physical 3D
abstractions of building typologies. Movements are automatically detected, scanned,
and digitized so as to generate inputs for computational analysis. 3D information is also
projected back onto the model to give the user feedback while edits are made.
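A minimal sketch of this scan-analyze-project loop, assuming the scanned table has already been reduced to a grid of typology codes; the density scoring shown is illustrative and is not the platform's actual analysis.

    # Sketch of the tangible feedback loop: scan the physical model as a grid of
    # typology codes, compute a per-cell score, and hand the result to whatever
    # projects feedback back onto the table. Scoring is illustrative only.
    TYPOLOGY_DENSITY = {"empty": 0, "low_rise": 2, "mid_rise": 5, "tower": 10}

    def analyze(grid):
        """Return a per-cell score grid (here: average density of each cell's neighborhood)."""
        rows, cols = len(grid), len(grid[0])
        scores = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                neighbors = [grid[rr][cc]
                             for rr in range(max(0, r - 1), min(rows, r + 2))
                             for cc in range(max(0, c - 1), min(cols, c + 2))]
                scores[r][c] = sum(TYPOLOGY_DENSITY[t] for t in neighbors) / len(neighbors)
        return scores

    def update(grid, projector):
        """One cycle: analyze the freshly scanned model and project feedback onto it."""
        projector(analyze(grid))

    scanned = [["tower", "low_rise"], ["empty", "mid_rise"]]
    update(scanned, projector=print)  # print stands in for projection mapping onto the table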
We recently led a workshop in Saudi Arabia, with staff from the Riyadh Development
Authority, to test a new version of our CityScope platform. With only an hour to work,
four teams of five professionals competed to develop a redevelopment proposal for a
neighborhood near the city center. The platform evaluated their designs according to
energy, daylighting, and walkability.
111. Context-Aware Dynamic Lighting
Kent Larson, Ryan C.C. Chin, Chih-Chao Chuang, William Lark, Jr., Brandon
Phillip Martin-Anderson and SiZhi Zhou
Cities are hubs for innovation, characterized by densely populated areas where people
and firms cluster together, share resources, and collaborate. In turn, dense cities show
higher rates of economic growth and viability. Yet the specific places where innovation occurs in urban areas, and the socioeconomic conditions that encourage it, remain elusive for both researchers and policymakers. Understanding the social and spatial
settings that enable innovation to accrue will equip policymakers and developers with
the metrics to promote and sustain innovation in cities. This research will measure the
attributes of innovation districts across the US in terms of their land-use configurations
and population characteristics and behaviors. These measurements will be used to
identify the factors that enable innovation, with the goal of developing a methodological
approach for producing quantitative planning guidelines to support decision-making
processes.
Agnis Stibe, Matthias Wunsch, Alexandra Millonig, Chengzhen Dai, Stefan Seer,
Katja Schechtner, Ryan C. C. Chin and Kent Larson
The effects of global climate change, in combination with rapid urbanization, have
forced cities to seek low-energy and less carbon-intensive modes of transport. Cities
have adopted policies like congestion pricing to encourage their citizens to give up private
automobiles and to use mass transit or bicycling and walking. In this research study, we
examine how persuasion technologies can be utilized to encourage positive modal
shifts in mobility behavior in cities. We are particularly interested in studying the key
persuasive strategies that enable, motivate, and trigger users to shift from high-energy
to low-energy modes. This project is a collaboration between the MIT Media Lab and
the Austrian Institute of Technology (AIT).
Alumni Contributors: Sandra Richter and Katja Schechtner
117. ViewCube
119. DbDb
NEW LISTING
120. GIFGIF
122. Me.TV
NEW LISTING
123. NewsClouds
124. Plethora
125. QUANTIFY
126. SolarCoin
NEW LISTING
127. Sphera
128. SuperGlue
130. VR Codes
to that specific entity. The Wall of Now is a single-view experience that challenges
previous perceptions of screen space utilization towards a future of extremely large,
high-resolution displays.
132. Ambisonic Surround-Sound Audio Compression
NEW LISTING
Breathing Window is a tool for non-verbal dialogue that reflects on your own breathing
while also offering a window on another person's respiration. This prototype is an
example of shared human experiences (SHEs) crafted to improve the quality of human
understanding and interactions. Our work on SHEs focuses on first encounters with
strangers. We meet strangers every day, and without prior background knowledge of
the individual we often form opinions based on prejudices and differences. In this work,
we bring respiration to the foreground as one common experience of all living
creatures.
Tod Machover, Akito Van Troyer, Benjamin Bloomberg, Charles Holbrow, David
Nunez, Simone Ovsey, Sarah Platte, Bryn Bliska, Rebecca Kleinberger, Peter
Alexander Torpey and Garrett Parrish
Until now, the impact of crowdsourced and interactive music projects has been limited:
the public contributes a small part of the final result, and is often disconnected from the
artist leading the project. We believe that a new musical ecology is needed for true
creative collaboration between experts and amateurs. Toward this goal, we have been
creating "city symphonies," each collaboratively composed with an entire city. We
designed the infrastructure needed to bring together an unprecedented number of
people, including a variety of web-based music composition applications, a social
media framework, and real-world community-building activities. We have premiered city
symphonies in Toronto, Edinburgh, Perth, and Lucerne, and are now developing, with
the support of the Knight Foundation, a symphony for Detroit, our first US city. We are
also working on scaling this process by mentoring independent groups, beginning with
the city of Akron, Ohio.
Tod Machover, Peter Torpey, Ben Bloomberg, Elena Jessop, Charles Holbrow,
Simone Ovsey, Garrett Parrish, Justin Martinez, and Kevin Nattinger
Tod Machover, Ben Bloomberg, Peter Torpey, Elena Jessop, Bob Hsiung, Akito
van Troyer
Tod Machover, Akito Van Troyer, Benjamin Bloomberg, Bryn Bliska, Charles
Holbrow, David Nunez, Rebecca Kleinberger, Simone Ovsey, Sarah Platte, Peter
Torpey, Kelly Donovan, Meejin Yoon and the Empathy and Experience class
The live global interactive simulcast of the final February 2014 performance of "Death
and the Powers" in Dallas made innovative use of satellite broadcast and Internet
technologies to expand the boundaries of second-screen experience and interactivity
during a live remote performance. In the opera, Simon Powers uploads his mind,
memories, and emotions into The System, represented onstage through reactive
robotic, visual, and sonic elements. Remote audiences, via simulcast, were treated as
part of The System alongside Powers and the operabots. Audiences had an omniscient
view of the action of the opera, as presented through the augmented, multi-camera
video and surround sound. Multimedia content delivered to mobile devices, through the
Powers Live app, privileged remote audiences with perspectives from within The
System. Mobile devices also allowed audiences to influence The System by affecting
the illumination of the Winspear Opera House's Moody Foundation Chandelier.
139. Fensadense
Tod Machover, Ben Bloomberg, Peter Torpey, Garrett Parrish, Kevin King
Fensadense is a new work for 10-piece ensemble composed by Tod Machover,
commissioned for the Lucerne Festival in summer 2015. The project represents the
next generation of hyperinstruments, involving the measurement of relative qualities of
many performers where previous systems only looked at a single performer.
Off-the-shelf components were used to collect data about movement and muscle
tension of each musician. The data was analyzed using the Hyperproduction platform to
create meaningful production control for lighting and sound systems based on the
connection of the performers, with a focus on qualities such as momentum, connection,
and tension of the ensemble as a whole. The project premiered at the Lucerne Festival,
and a European tour is scheduled for spring 2016.
141. Hyperinstruments
Tod Machover
The Hyperinstruments project creates expanded musical instruments and uses
technology to give extra power and finesse to virtuosic performers. They were designed
to augment a wide range of traditional musical instruments and have been used by
some of the world's foremost performers (Yo-Yo Ma, the Los Angeles Philharmonic,
Peter Gabriel, and Penn & Teller). Research focuses on designing computer systems
that measure and interpret human expression and feeling, exploring appropriate
modalities and content of interactive art and entertainment environments, and building
sophisticated interactive musical instruments for non-professional musicians, students,
music lovers, and the general public. Recent projects involve the production of a new
version of the "classic" Hyperstring Trilogy for the Lucerne Festival, and the design of a
new generation of Hyperinstruments, for Fensadense and other projects, that
emphasizes measurement and interpretation of inter-player expression and
communication, rather than simply the enhancement of solo performance.
Alumni Contributors: Roberto M. Aimi, Mary Farbood, Ed Hammond, Tristan Jehan,
Margaret Orth, Dan Overholt, Egon Pasztor, Joshua Strickon, Gili Weinberg and Diana
Young
142. Hyperproduction: Advanced Production Systems
143. Hyperscore
Tod Machover
Our voice is an important part of our individuality. From the voices of others, we
understand a wealth of non-linguistic information, such as identity, social-cultural clues,
and emotional state. But the relationship we have with our own voice is less obvious.
We don't hear it the way others do, and our brain treats it differently from any other
sound. Yet its sonority is deeply connected with how we are perceived by society and
how we see ourselves, body and mind. This project is composed of software, devices,
installations, and thoughts used to challenge us to gain new insights on our voices. To
increase self-awareness, we propose different ways to extend, project, and visualize
the voice. We show how our voices sometimes escape our control, and we explore the
consequences in terms of self-reflection, cognitive processes, therapy, affective
features visualization, and communication improvement.
Vocal Vibrations explores relationships between human physiology and the vibrations
of the voice. The voice is an expressive instrument that nearly everyone possesses and
that is intimately linked to the physical form. In collaboration with Le Laboratoire and the
MIT Dalai Lama Center, we examine the hypothesis that voices can influence mental
and physical health through physico-physiological phenomena. The first Vocal
Vibrations installation premiered in Paris, France, in March 2014. The public "Chapel"
space of the installation encouraged careful meditative listening. A private "Cocoon"
environment guided an individual to explore his/her voice, augmented by tactile and
acoustic stimuli. Vocal Vibrations then had a successful showing as the inaugural
installation at the new Le Laboratoire Cambridge from November 2014 through March
2015. The installation was incorporated into Le Laboratoire's Memory/Witness of the
Unimaginable exhibit, April 17-August 16, 2015.
Alumni Contributor: Eyal Shahar
reference image. It acts both as a physical spraying device and as an intelligent digital
guiding tool that provides manual and computerized control. Using an inverse rendering
approach allows for a new augmented painting experience with unique results. We
present our novel hardware design, control software, and a discussion of the
implications of human-computer collaborative painting.
153. Enlight
Tal Achituv, Natan Linder, Rony Kubat, Pattie Maes and Yihui Saw
In physics education, virtual simulations have given us the ability to show and explain
phenomena that are otherwise invisible to the naked eye. However, experiments with
analog devices still play an important role. They allow us to verify theories and discover
ideas through experiments that are not constrained by software. What if we could
combine the best of both worlds? We achieve that by building our applications on a
projected augmented reality system. By projecting onto physical objects, we can paint
the phenomena that are invisible. With our system, we have built "physical
playgrounds": simulations that are projected onto the physical world and that respond to
detected objects in the space. Thus, we can draw virtual field lines on real magnets,
track and provide history on the location of a pendulum, or even build circuits with both
physical and virtual components.
155. FingerReader
FingerReader is a finger-worn device that helps the visually impaired to effectively and
efficiently read paper-printed text. It scans text in a local-sequential manner, enabling the user to read single lines or blocks of text, or to skim for important sections, while providing auditory and haptic feedback.
2D screens, even stereoscopic ones, limit our ability to interact with and collaborate on
3D data. We believe that an augmented reality solution, where 3D data is seamlessly
integrated in the real world, is promising. We are exploring a collaborative augmented
reality system for visualizing and manipulating 3D data using a head-mounted,
see-through display, that allows for communication and data manipulation using simple
hand gestures.
158. HRQR
NEW LISTING
162. LuminAR
JaJan! is a telepresence system wherein remote users can learn a second language
together while sharing the same virtual environment. JaJan! can support five aspects of
language learning: learning in context; personalization of learning materials; learning
with cultural information; enacting language-learning scenarios; and supporting
creativity and collaboration. Although JaJan! is still in an early stage, we are confident
that it will bring profound changes to the ways in which we experience language
learning and can make a great contribution to the field of second language education.
LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into
a new category of robotic, digital information devices. The LuminAR Bulb combines a
Pico-projector, camera, and wireless computer in a compact form factor. This
self-contained system provides users with just-in-time projected information and a
gestural user interface, and it can be screwed into standard light fixtures everywhere.
The LuminAR Lamp is an articulated robotic arm, designed to interface with the
LuminAR Bulb. Both LuminAR form factors dynamically augment their environments
with media and information, while seamlessly connecting with laptops, mobile phones,
and other electronic devices. LuminAR transforms surfaces and objects into interactive
spaces that blend digital media and information with the physical space. The project
radically rethinks the design of traditional lighting objects, and explores how we can
endow them with novel augmented-reality interfaces.
Rony Daniel Kubat, Natan Linder, Ben Weissmann, Niaja Farve, Yihui Saw and
Pattie Maes
Move Your Glass is an activity and behavior tracker that also tries to increase wellness
by nudging the wearer to engage in positive behaviors.
Valentin Heun, Shunichi Kasahara, James Hobin, Kevin Wong, Michelle Suh,
Benjamin F Reynolds, Marc Teyssier, Eva Stern-Rodriguez, Afika A Nyati, Kenny
Friedman, Anissa Talantikite, Andrew Mendez, Jessica Laughlin, Pattie Maes
Open Hybrid is an open source augmented reality platform for physical computing and
Internet of Things. It is based on the web and Arduino.
Tal Achituv
The Reality Editor system supports editing the behavior and interfaces of so-called
"smart objects": objects or devices that have an embedded processor and
communication capability. Using augmented reality techniques, the Reality Editor maps
graphical elements directly on top of the tangible interfaces found on physical objects,
such as push buttons or knobs. The Reality Editor allows flexible reprogramming of the
interfaces and behavior of the objects, as well as defining relationships between smart
objects in order to easily create new functionalities.
Scanner Grabber is a digital police scanner that enables reporters to record, playback,
and export audio, as well as archive public safety radio (scanner) conversations. Like a
TiVo for scanners, it's an update on technology that has been stuck in the last century.
It's a great tool for newsrooms. For instance, a problem for reporters is missing the
beginning of an important police incident because they have stepped away from their
desk at the wrong time. Scanner Grabber solves this because conversations can be
played back. Also, snippets of exciting audio, for instance a police chase, can be
exported and embedded online. Reporters can listen to files while writing stories, or
listen to older conversations to get a more nuanced grasp of police practices or
long-term trouble spots. Editors and reporters can use the tool for collaborating, or
crowdsourcing/public collaboration.
169. ScreenSpire
Pattie Maes, Tal Achituv, Chang Long Zhu Jin and Isa Sobrinho
Screen interactions have been shown to contribute to increases in stress, anxiety, and
deficiencies in breathing patterns. Since better respiration patterns can have a positive
impact on wellbeing, ScreenSpire improves respiration patterns during information work
using subliminal biofeedback. By using subtle graphical variations tuned to influence the user subconsciously, user distraction and cognitive load are minimized. To enable truly seamless interaction, we have adapted an RF-based sensor (the ResMed S+ sleep sensor) to serve as a screen-mounted, contact-free respiration sensor. Traditionally, respiration sensing is achieved with either invasive or
on-skin sensors (such as a chest belt); having a contact-free sensor contributes to
increased ease, comfort, and user compliance, since no special actions are required
from the user.
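One way to picture the subliminal pacing idea is the sketch below, which drives a very small sinusoidal opacity cue slightly slower than the measured breathing rate; the pacing rule and constants are assumptions for illustration, not ScreenSpire's actual design.

    # Sketch of subliminal breathing biofeedback under assumed parameters: a slow
    # sinusoidal opacity modulation paced slightly below the measured breathing
    # rate, intended to gently entrain slower respiration. Constants are illustrative.
    import math

    def target_period(measured_breaths_per_min, slowdown=0.9, floor_bpm=6.0):
        """Pace the visual cue a bit slower than the user's current breathing."""
        paced_bpm = max(floor_bpm, measured_breaths_per_min * slowdown)
        return 60.0 / paced_bpm     # seconds per breath cycle

    def cue_opacity(t_seconds, period_s, depth=0.03):
        """Very small opacity offset (about +/-3%) so the cue stays below conscious notice."""
        return depth * math.sin(2 * math.pi * t_seconds / period_s)

    period = target_period(measured_breaths_per_min=15.0)   # e.g., from the RF sensor
    print(round(period, 2), round(cue_opacity(1.0, period), 4))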
171. SmileCatcher
173. TagMe
TagMe is an end-user toolkit for easy creation of responsive objects and environments.
It consists of a wearable device that recognizes the object or surface the user is
touching. The user can make everyday objects come to life through the use of RFID tag
stickers, which are read by an RFID bracelet whenever the user touches the object. We
present a novel approach to create simple and customizable rules based on emotional
attachment to objects and social interactions of people. Using this simple technology,
the user can extend their application interfaces to include physical objects and surfaces
into their personal environment, allowing people to communicate through everyday
objects in very low-effort ways.
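A minimal sketch of the rule idea, assuming each sticker reports a simple tag identifier; the tag names and actions below are hypothetical examples of user-defined rules.

    # Illustrative sketch of TagMe-style end-user rules: each RFID sticker ID maps
    # to a user-defined action that fires when the bracelet reads that tag.
    # Tag IDs and actions are hypothetical examples.
    RULES = {
        "tag:door":   lambda: print("Texting partner: 'Just got home!'"),
        "tag:mug":    lambda: print("Logging a coffee break"),
        "tag:guitar": lambda: print("Starting practice-session timer"),
    }

    def on_tag_read(tag_id: str) -> None:
        """Called whenever the wearable RFID reader detects a sticker."""
        action = RULES.get(tag_id)
        if action is not None:
            action()

    on_tag_read("tag:mug")   # -> Logging a coffee break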
175. 3D Printing of Functionally Graded Materials
176. Additive Manufacturing in Glass: Electrosintering and Spark Gap Glass
177. Anthozoa
Neri Oxman
A 3D-printed dress debuted during Paris Fashion Week Spring 2013 as part of a collaboration with fashion designer Iris Van Herpen for her show "Voltage." The
3D-printed skirt and cape were produced using Stratasys' unique Objet Connex
multi-material 3D printing technology, which allows a variety of material properties to be
printed in a single build. This allowed both hard and soft materials to be incorporated
within the design, crucial to the movement and texture of the piece. Core contributors
include: Iris Van Herpen, fashion designer (Amsterdam); Keren Oxman, artist and
designer (NY); and W. Craig Carter (Department of Materials Science and Engineering,
MIT). Fabricated by Stratasys.
178. Beast
Neri Oxman
Beast is an organic-like entity created synthetically by the incorporation of physical
parameters into digital form-generation protocols. A single continuous surface, acting
both as structure and as skin, is locally modulated for both structural support and
corporeal aid. Beast combines structural, environmental, and corporeal performance by
adapting its thickness, pattern density, stiffness, flexibility, and translucency to load,
curvature, and skin-pressured areas respectively.
Neri Oxman, Jorge Duro-Royo, Markus Kayser, Jared Laucks and Laia
Mogas-Soldevila
The Biblical story of the Tower of Babel involved a deliberate plan hatched by mankind
to construct a platform from which man could fight God. The tower represented the first
documented attempt at constructing a vertical city. The divine response to the master
180. Building-Scale 3D Printing
Neri Oxman
Carpal Skin is a prototype for a protective glove designed to guard against Carpal Tunnel
Syndrome, a medical condition in which the median nerve is compressed at the wrist,
leading to numbness, muscle atrophy, and weakness in the hand. Night-time wrist
splinting is the recommended treatment for most patients before going into carpal
tunnel release surgery. Carpal Skin is a process by which to map the pain-profile of a
particular patient--its intensity and duration--and to distribute hard and soft materials to
fit the patient's anatomical and physiological requirements, limiting movement in a
customized fashion. The form-generation process is inspired by animal coating patterns
in the control of stiffness variation.
Neri Oxman
183. Digitally Reconfigurable Surface
CNSILK explores the design and fabrication potential of silk fibers inspired by silkworm
cocoons for the construction of woven habitats. It explores a novel approach to the
design and fabrication of silk-based building skins by controlling the mechanical and
physical properties of spatial structures inherent in their microstructures using multi-axis
fabrication. The method offers construction without assembly, such that material
properties vary locally to accommodate for structural and environmental requirements.
This approach stands in contrast to functional assemblies and kinetically actuated
facades which require a great deal of energy to operate, and are typically maintained by
global control. Such material architectures could simultaneously bear structural load,
change their transparency so as to control light levels within a spatial compartment
(building or vehicle), and open and close embedded pores so as to ventilate a space.
The digitally reconfigurable surface is a pin matrix apparatus for directly creating rigid
3D surfaces from a computer-aided design (CAD) input. A digital design is uploaded
into the device, and a grid of thousands of tiny pins, much like the popular pin-art toy, is actuated to form the desired surface. A rubber sheet is held by vacuum
pressure onto the tops of the pins to smooth out the surface they form; this strong
surface can then be used for industrial forming operations, simple resin casting, and
many other applications. The novel phase-changing electronic clutch array allows the
device to have independent position control over thousands of discrete pins with only a
single motorized "push plate," lowering the complexity and manufacturing cost of this
type of device. Research is ongoing into new actuation techniques to further lower the
cost and increase the surface resolution of this technology.
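The single-push-plate idea can be sketched as a scheduling problem: each pin rides the plate until its clutch engages, so its final height is simply the plate travel at which it is locked. The sketch below illustrates that scheduling under assumed dimensions; it is not the device's control code.

    # Sketch of single-actuator pin control: every pin follows the push plate until
    # its clutch engages, so a CAD heightmap becomes a per-pin locking schedule.
    # Dimensions and the locking rule are illustrative assumptions.
    def clutch_schedule(target_heights_mm, plate_travel_mm=50.0):
        """For each pin, the plate displacement at which to engage its clutch."""
        schedule = {}
        for pin_id, height in target_heights_mm.items():
            if not 0.0 <= height <= plate_travel_mm:
                raise ValueError(f"pin {pin_id}: height {height} outside plate travel")
            # The pin stops rising once locked, so lock it when the plate reaches its height.
            schedule[pin_id] = height
        return schedule

    # Target surface from a CAD heightmap (pin_id -> desired height in mm):
    targets = {(0, 0): 12.5, (0, 1): 30.0, (1, 0): 0.0, (1, 1): 47.5}
    print(clutch_schedule(targets))

In effect, the desired surface is reduced to a timing pattern for the clutch array over a single stroke of the push plate.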
184. FABRICOLOGY: Variable-Property 3D Printing as a Case for Sustainable Fabrication
Neri Oxman
Rapid prototyping technologies speed product design by facilitating visualization and
testing of prototypes. However, such machines are limited to using one material at a
time; even high-end 3D printers, which accommodate the deposition of multiple
materials, must do so discretely and not in mixtures. This project aims to build a
proof-of-concept of a 3D printer able to dynamically mix and vary the ratios of different
materials in order to produce a continuous gradient of material properties with real-time
correspondence to structural and environmental constraints.
Alumni Contributors: Mindy Eng, William J. Mitchell and Rachel Fong
185. FitSocket: Measurement for Attaching Objects to People
Neri Oxman, Carlos Gonzalez Uribe and Hugh Herr and the Biomechatronics
group
187. Gemini
Neri Oxman with Le Laboratoire (David Edwards, Founder), Stratasys, and SITU
Fabrication
Neri Oxman, Markus Kayser, John Klein, Chikara Inamura, Daniel Lizardo, Giorgia
Franchin, Michael Stern, Shreya Dave, Peter Houk, MIT Glass Lab
Digital design and construction technologies for product and building scale are
generally limited in their capacity to deliver multi-functional building skins. Recent
advancements in additive manufacturing and digital fabrication at large are today
enabling the fabrication of multiple materials with combinations of mechanical,
electrical, and optical properties; however, most of these materials are non-structural
and cannot scale to architectural applications. Operating at the intersection of additive
manufacturing, biology, and architectural design, the Glass Printing project is an
enabling technology for optical glass 3D printing at architectural scale designed to
manufacture multi-functional glass structures and facade elements. The platform
deposits molten glass in a layer-by-layer (FDM) fashion, implementing numerical
control of tool paths, and it allows for controlled optical variation across surface and
volume areas.
189. Lichtenberg 3D Printing
Neri Oxman, Chikara Inamura, Daniel Lizardo, Michael Stern, Giorgia Franchin,
Shreya Dave, Pierre-Thomas Brun, Peter Houk, MIT Glass Lab
NEW LISTING
The Living Glass project has its roots in concepts of metabolist design and fabrication.
Dynamic energy systems within permanent constructions have long been a goal for
architects, and our group includes the use of materials engineering in this pursuit. The
project is an enabling technology, infrastructure, and manifesto of the building block of
the future. Living Glass utilizes glass printing by integrating cross-disciplinary research
in material, mechanical, thermal, structural, and computational engineering to increase
the resolution and repeatability towards scalability. In conjunction with the development
of the next-generation manufacturing platform, computational design workflow will be
developed to integrate multi-objective optimization of structural and environmental
performance within the material system. The goal of the project is to manifest this
technology through the deployment of the functionally gradient high-fidelity glass
structure with internal vasculature that serves as the infrastructure of functional fluidics
to harness environmental energy.
Mediated Matter group: Neri Oxman (principal investigator), Will Patrick (project
lead), Steven Keating, and Sunanda Sharma; Stratasys; Christoph Bader and
Dominik Kolb; Prof. Pamela Silver and Stephanie Hays (Harvard Medical School);
and Dr. James Weaver
How can we design relationships between the most primitive and sophisticated life
forms? Can we design wearables embedded with synthetic microorganisms that can
enhance and augment biological functionality, and generate consumable energy when
exposed to the sun? We explored these questions through the creation of Mushtari, a
3D-printed wearable with 58 meters of internal fluid channels. Designed to function as a
microbial factory, Mushtari uses synthetic microorganisms to convert sunlight into
useful products for the wearer, engineering a symbiotic relationship between two
bacteria: photosynthetic cyanobacteria and E. coli. The cyanobacteria convert sunlight
to sucrose, and E. coli convert sucrose to useful products such as pigments, drugs,
food, fuel, and scents. This form of symbiosis, known as co-culture, is a phenomenon
commonly found in nature. Mushtari is part of the Wanderers collection, an
astrobiological exploration dedicated to medieval astronomers who explored worlds
beyond by visiting worlds within.
192. Meta-Mesh: Computational Model for Design and Fabrication of Biomimetic Scaled Body Armors
Altec, BASF, Neri Oxman, Steven Keating, John Klein, Julian Leland and Nathan
Spielberg
195. Monocoque
Neri Oxman
French for "single shell," Monocoque stands for a construction technique that supports
structural load using an object's external skin. Contrary to the traditional design of
building skins that distinguish between internal structural frameworks and non-bearing
skin elements, this approach promotes heterogeneity and differentiation of material
properties. The project demonstrates the notion of a structural skin using a Voronoi
pattern, the density of which corresponds to multi-scalar loading conditions. The
distribution of shear-stress lines and surface pressure is embodied in the allocation and
relative thickness of the vein-like elements built into the skin. Its innovative 3D printing
technology provides for the ability to print parts and assemblies made of multiple
materials within a single build, as well as to create composite materials that present
preset combinations of mechanical properties.
Neri Oxman, Will Patrick, Sunanda Sharma, Steven Keating, Steph Hays,
Elonore Tham, Professor Pam Silver, and Professor Tim Lu
How can biological organisms be incorporated into product, fashion, and architectural
design to enable the generation of multi-functional, responsive, and highly adaptable
objects? This research pursues the intersection of synthetic biology, digital fabrication,
and design. Our goal is to incorporate engineered biological organisms into inorganic
and organic materials to vary material properties in space and time. We aim to use
synthetic biology to engineer organisms with varied output functionalities and digital
fabrication tools to pattern these organisms and induce their specific capabilities with
spatiotemporal precision.
198. Printing Multi-Material 3D Microfluidics
Neri Oxman, Steven Keating, Will Patrick and David Sun Kong (MIT Lincoln
Laboratory)
Neri Oxman
200. Raycounting
Neri Oxman
Raycounting is a method for generating customized light-shading constructions by
registering the intensity and orientation of light rays within a given environment. 3D
surfaces of double curvature are the result of assigning light parameters to flat planes.
The algorithm calculates the intensity, position, and direction of one or multiple light
sources placed in a given environment, and assigns local curvature values to each
point in space corresponding to the reference plane and the light dimension. Light
performance analysis tools are reconstructed programmatically to allow for
morphological synthesis based on intensity, frequency, and polarization of light
parameters as defined by the user.
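As a numerical illustration of the kind of light analysis described, the sketch below samples the combined intensity and mean direction of several point sources at points on a reference plane, assuming inverse-square falloff; the mapping from these values to local curvature is left abstract and the numbers are invented.

    # Numerical sketch only: sample combined intensity and mean direction of point
    # light sources at points on a reference plane, assuming inverse-square falloff.
    # The assignment of curvature from these values is not modeled here.
    import math

    def light_field(point, lights):
        """Return (total_intensity, unit mean direction) at a 3D point."""
        ix = iy = iz = total = 0.0
        for (lx, ly, lz), power in lights:
            dx, dy, dz = lx - point[0], ly - point[1], lz - point[2]
            dist2 = dx * dx + dy * dy + dz * dz
            intensity = power / dist2                      # inverse-square falloff
            total += intensity
            norm = math.sqrt(dist2)
            ix += intensity * dx / norm
            iy += intensity * dy / norm
            iz += intensity * dz / norm
        mag = math.sqrt(ix * ix + iy * iy + iz * iz) or 1.0
        return total, (ix / mag, iy / mag, iz / mag)

    lights = [((0.0, 0.0, 3.0), 100.0), ((2.0, 1.0, 2.0), 50.0)]
    for p in [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]:
        print(p, light_field(p, lights))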
Neri Oxman, Jorge Duro-Royo, Carlos Gonzalez, Markus Kayser, and Jared
Laucks, with James Weaver (Wyss Institute, Harvard University) and Fiorenzo
Omenetto (Tufts University)
The Silk Pavilion explores the relationship between digital and biological fabrication.
The primary structure was created from 26 polygonal panels made of silk threads laid
down by a CNC (Computer-Numerically Controlled) machine. Inspired by the silkworm's
ability to generate a 3D cocoon out of a single multi-property silk thread, the pavilion's
overall geometry was created using an algorithm that assigns a single continuous
thread across patches, providing various degrees of density. Overall density variation
was informed by deploying the silkworm as a biological "printer" in the creation of a
secondary structure. Positioned at the bottom rim of the scaffold, 6,500 silkworms spun
flat, non-woven silk patches as they locally reinforced the gaps across CNC-deposited
silk fibers. Affected by spatial and environmental conditions (geometrical density,
variation in natural light and heat), the silkworms were found to migrate to darker and
denser areas.
202. SpiderBot
arrangement is capable of moving large distances without the need for more
conventional linear guides, much like a spider does. The system is easy to set up for
mobile projects, and will afford sufficient printing resolution and build volume.
Expanding foam can be deposited to create a building-scale printed object rapidly.
Another material type of interest is the extrusion or spinning of tension elements, like
rope or cable. With tension elements, unique structures such as bridges or webs can be
wrapped, woven, or strung around environmental features or previously printed
materials.
205. CremateBot: Transform, Reborn, Free
Mary Tsang
Open Source Estrogen: Housewives Making Drugs combines do-it-yourself science,
body and gender politics, and ethics of hormonal manipulation. The goal of the project
is to create an open-source protocol for estrogen biosynthesis. The kitchen is a
politically charged space prescribed to women as their proper dwelling, therefore
making it the precise context to perform an estrogen synthesis recipe. With recent
developments in the field of synthetic biology, the customized kitchen laboratory may
be a ubiquitous possibility in the near future. Open-access estrogen would allow women
and transgender females to exercise greater control over their bodies by circumventing
governments and institutions. We want to ask: What are the biopolitics governing our
bodies? More importantly, is it ethical to self-administer self-synthesized hormones?
Ai Hasegawa and Sputniko!
Facing a food crisis driven by overpopulation, this project explores a possible future where a small community of activists arises to design an edible cockroach that can
survive in harsh environments. These genetically modified roaches are designed to
pass their genes to the next generations; thus the awful black and brown roaches will
be pushed to extinction by the newly designed, cute, colorful, tasty, and highly
nutritional "pop roach." The color of these "pop roaches" corresponds to a different
flavor, nutrition, and function, while the original ones remain black or brown, and not
recommended to be eaten. How will genetic engineering shift our perception of food
and eating habits? Pop Roach explores how we can expand our perception of cuisine
to solve some of the world's most pressing problems.
Sputniko!
We are currently working on designing and creating a new Bio-Art Pavilion in Teshima,
an art-island which is part of Benesse Art Site at Naoshima, Japan. The pavilion will
include a permanent exhibition space by Sputniko! / Design Fiction group, and will host
open workshops and lectures in the lab space for young children to experiment,
discuss, and imagine the implications of emerging biotechnologies. The project is
commissioned for Setouchi Art Triennale Festival in 2016.
Biotechnology is changing nature, and with it, beauty. How will we, and society as a
whole, respond to these changes? Aphrodite, the ancient Greek goddess of love,
beauty, and procreation, is said to have been born from the ocean and wrapped in the
smell of roses, and captivated the gods of Olympus with her beauty. Our heroine Amy
turns herself into a modern biotech Aphrodite when she creates an irresistible
rose-scented dress from genetically engineered silk to captivate her secret crush's
heart. Her story is not entirely science fiction: we have created Amy's dress using
glowing silk, created in 2008 by the National Institute of Agrobiological Sciences (NIAS)
in Japan by incorporating jellyfish and coral genes into silkworms. We are also working
with NIAS to develop a rose-scented rose silk and a "love-inducing" silk containing
oxytocin, both engineered by Amy in the story.
Donald Derek H.
RESTful services and the Web provide a framework and structure for content delivery
that is scalable, not only in size but, more importantly, in use cases. As we in
Responsive Environments build systems to collect, process, and deliver sensor data,
this project serves as a research platform that can be shared between a variety of
projects both inside and outside the group. By leveraging hyperlinks between sensor data, clients can browse, explore, and discover relationships and interactions in ways that can grow over time.
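A hedged sketch of hyperlink-driven browsing of sensor resources follows; the URL and the "links" JSON layout are assumptions for illustration and are not the actual Chain API schema.

    # Hedged sketch of crawling hyperlinked sensor resources. The URL and the
    # "links"/"href" JSON layout below are assumed for illustration only.
    import requests

    def crawl(start_url, max_depth=2, seen=None):
        """Follow hyperlinks between sensor resources, yielding each resource once."""
        seen = set() if seen is None else seen
        if start_url in seen or max_depth < 0:
            return
        seen.add(start_url)
        resource = requests.get(start_url, timeout=5).json()
        yield start_url, resource
        for link in resource.get("links", []):            # assumed hypermedia layout
            yield from crawl(link["href"], max_depth - 1, seen)

    # Hypothetical entry point for a deployment's sensor graph:
    # for url, res in crawl("https://sensors.example.org/api/site/1"):
    #     print(url, res.get("type"))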
NEW LISTING
The Circuit Sticker Activity Book is a primer for using circuit stickers to create
expressive electronics. Inside are explanations of the stickers, and circuits and
templates for building functional electronics directly on the pages of the book. The book
covers five topics, from simple LED circuits to crafting switches and sensors. As users
complete the circuits, they are also prompted with craft and drawing activities to ensure
an expressive and artistic approach to learning and building circuits. Once completed,
the book serves as an encyclopedia of techniques to apply to future projects.
217. disCERN: Sonification Platform for High-Energy Physics Data
218. DoppelLab: Experiencing Multimodal Sensor Data
220. FingerSynth: Wearable Transducers for Exploring the Environment through Sound
Homes and offices are being filled with sensor networks to answer specific queries and
solve pre-determined problems, but no comprehensive visualization tools exist for
fusing these disparate data to examine relationships across spaces and sensing
modalities. DoppelLab is a cross-reality virtual environment that represents the
multimodal sensor data produced by a building and its inhabitants. Our system
encompasses a set of tools for parsing, databasing, visualizing, and sonifying these
data; by organizing data by the space from which they originate, DoppelLab provides a
platform to make both broad and specific queries about the activities, systems, and
relationships in a complex, sensor-rich environment.
We are evaluating new methods of interacting and controlling solid-state lighting based
on our findings of how participants experience and perceive architectural lighting in our
new lighting laboratory (E14-548S). This work, aptly named "Experiential Lighting,"
reduces the complexity of modern lighting controls (intensity/color/space) into a simple
mapping, aided by both human input and sensor measurement. We believe our
approach extends beyond general lighting control and is applicable in situations where
human-based rankings and preference are critical requirements for control and
actuation. We expect our foundational studies to guide future camera-based systems
that will inevitably incorporate context in their operation (e.g., Google Glass).
NEW LISTING
In this project we investigate how the process of building a circuit can be made more
organic, like sketching in a sketchbook. We integrate a rechargeable power supply into
the spine of a traditional sketchbook, so that each page of the sketchbook has power
connections. This enables users to begin creating functioning circuits directly onto the
pages of the book and to annotate as they would in a regular notebook. The sequential
nature of the sketchbook allows creators to document their process for circuit design.
The book also serves as a single physical archive of various hardware designs. Finally,
the portable and rechargeable nature of the book allows users to take their electronic
prototypes off of the lab bench and share their creations with people outside of the lab
environment.
Imagine a future where lights are not fixed to the ceiling, but follow us wherever we are.
In this colorful world we enjoy lighting that is designed to go along with the moment, the
activity, our feelings, and our outfits. Halo is a wearable lighting device created to
explore this scenario. Different from architectural lighting, this personal lighting device
aims to illuminate and present its user. Halo changes the wearer's appearance with the
ease of a button click, similar to adding a filter to a photograph. It can also change the
user's view of the world, brightening up a rainy day or coloring a gray landscape. Halo
can react to activities and adapt based on context. It is a responsive window between
the wearer and his or her surroundings.
223. HearThere: Ubiquitous Sonic Overlay
224. ListenTree: Audio-Haptic Display in the Natural Environment
Glorianna Davenport, Joe Paradiso, Gershon Dublon, Pragun Goyal and Brian
Dean Mayton
With our Ubiquitous Sonic Overlay, we are working to place virtual sounds in the user's
environment, fixing them in space even as the user moves. We are working toward
creating a seamless auditory display, indistinguishable from the user's actual
surroundings. Between bone-conduction headphones, small and cheap orientation
sensors, and ubiquitous GPS, a confluence of fundamental technologies is in place.
However, existing head-tracking systems either limit the motion space to a small area
(e.g., Oculus Rift), or sacrifice precision for scale using technologies like GPS. We are
seeking to bridge the gap to create large outdoor spaces of sonic objects.
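As a rough illustration of the geometry such a system must solve, the Python sketch below (our own example, not the HearThere implementation; the coordinates and function name are illustrative) computes the azimuth at which a geo-anchored virtual sound should be rendered, given the listener's GPS fix and compass heading.

    import math

    def relative_azimuth(listener_lat, listener_lon, heading_deg, source_lat, source_lon):
        """Angle of the source relative to where the listener faces, in degrees
        (positive = to the listener's right), using a flat-earth approximation."""
        dlat = math.radians(source_lat - listener_lat)
        dlon = math.radians(source_lon - listener_lon) * math.cos(math.radians(listener_lat))
        bearing = math.degrees(math.atan2(dlon, dlat)) % 360   # 0 degrees = due north
        return (bearing - heading_deg + 180) % 360 - 180

    # Listener facing north; a virtual sound anchored slightly to the east
    # should be rendered about 90 degrees to the right.
    print(round(relative_azimuth(42.3606, -71.0876, 0.0, 42.3606, -71.0866), 1))

In practice the head orientation comes from the wearable sensors and the output angle drives binaural rendering over bone-conduction headphones; the precision-versus-scale trade-off noted above enters through the error in the GPS fix and heading.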
people to explore this data, both remotely and onsite. The remote interface allows for
immersive 3D exploration of the terrain, while visitors to the site will be able to access
data from the network around them directly from wearable devices.
Joseph A. Paradiso, Gershon Dublon, Brian Mayton, Spencer Russell and Donald
Derek H.
229. NailO
Light enables our visual perception. It is the most common medium for displaying digital
information. Light regulates our circadian rhythms, affects productivity and social
interaction, and makes people feel safe. Yet despite the significance of light in
structuring human relationships with their environments on all these levels, we
communicate very little with our artificial lighting systems. Occupancy, ambient
illuminance, intensity, and color preferences are the only input signals currently
provided to these systems. With advanced sensing technology, we can establish better
communication with our devices. This effort is often described as context-awareness.
Context has typically been divided into properties such as location, identity, affective
state, and activity. Using wearable and infrastructure sensors, we are interested in
detecting these properties and using them to control lighting. The Mindful Photons
Project aims to close the loop and allow our light sources to "see" us.
MMODM is an online drum machine based on the Twitter streaming API, using tweets
from around the world to create and perform musical sequences together in real time.
Users anywhere can express 16-beat note sequences across 26 different instruments,
using plain-text tweets from any device. Meanwhile, users on the site itself can use the
graphical interface to locally DJ the rhythm, filters, and sequence blending. By
harnessing this duo of website and Twitter network, MMODM enables a whole new
scale of synchronous musical collaboration between users locally, remotely, across a
wide variety of computing devices, and across a variety of cultures.
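To make the plain-text interface concrete, here is a minimal Python sketch of how such tweets could be parsed into 16-step sequences; the "instrument:pattern" syntax shown is our illustrative assumption, not MMODM's actual tweet format.

    def parse_pattern_tweet(text):
        """Parse hypothetical 'instrument:pattern' tokens into 16-step hit lists."""
        sequences = {}
        for token in text.split():
            if ":" not in token:
                continue                      # skip hashtags and ordinary words
            name, pattern = token.split(":", 1)
            steps = [c == "x" for c in pattern[:16].ljust(16, ".")]
            sequences[name.lower()] = steps
        return sequences

    print(parse_pattern_tweet("#beat kick:x...x...x...x... snare:....x.......x..."))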
As part of the Living Observatory ecological sensing initiative, we've been developing
new approaches to mobile, wearable sensor data visualization. The Tidmarsh app for
Google Glass visualizes real-time sensor network data based on the wearer's location
and gaze. A user can approach a sensor node to see 2D plots of its real-time data
stream, and look across an expanse to see 3D plots encompassing multiple devices.
On the back-end, the app showcases our Chain API, crawling linked data resources to
build a dynamic picture of the sensor network. Besides development of new
visualizations, we are building in support for voice queries, and exploring ways to
encourage distributed data collection by users.
231. SensorChimes: Musical Mapping for Sensor Networks
Data-Pop Alliance is a joint initiative on big data and development with a goal of helping
to craft and leverage the new ecosystem of big data--new personal data, new tools,
new actors--to improve decisions and empower people in a way that avoids the pitfalls
of a new digital divide, de-humanization, and de-democratization. Data-Pop Alliance
aims to serve as a designer, broker, and implementer of ideas and activities, bringing
together institutions and individuals around common principles and objectives through
collaborative research, training and capacity building, technical assistance, convening,
knowledge curation, and advocacy. Our thematic areas of focus include official
statistics, socio-economic and demographic methods, conflict and crime, climate
change and environment, literacy, and ethics.
234. Enigma
NEW LISTING
Erez Shmueli, Alex 'Sandy' Pentland, Dhaval Adjodah and David Shrier
236. Leveraging Leadership Expertise More Effectively in Organizations
NEW LISTING
We believe that the narrative of listening only to experts, or of blindly trusting the wisdom of the crowd, is flawed. Instead, we have developed a system that weighs experts and lay people differently and dynamically, and we show that a good balance between the two is required. Our methodology leads to a 15 percent improvement in mean performance, a 15 percent decrease in variance, and an almost 30 percent increase in a Sharpe-type ratio in a real online market.
Alex 'Sandy' Pentland, Bruno Lepri and David Shrier
The Mobile Territorial Lab (MTL) aims at creating a living laboratory integrated in the
real life of the Trento territory in Italy, open to many kinds of experimentation. In
particular, the MTL is focused on exploiting the sensing capabilities of mobile phones to
track and understand human behaviors (e.g., families' spending behaviors, lifestyles,
mood, and stress patterns); on designing and testing social strategies aimed at
empowering individual and collective lifestyles through attitude and behavior change;
and on investigating new paradigms in personal data management and sharing. This
project is a collaboration with Telecom Italia SKIL Lab, Foundation Bruno Kessler, and
Telefonica I+D.
238. On the Reidentifiability of Credit Card Metadata
239. openPDS/SafeAnswers: Protecting the Privacy of Metadata
Even when real names and other personal information are stripped from metadata
datasets, it is often possible to use just a few pieces of information to identify a specific
person. Here, we study three months of credit card records for 1.1 million people and
show that four spatiotemporal points are enough to uniquely reidentify 90 percent of
individuals. We show that knowing the price of a transaction increases the risk of
reidentification by 22 percent, on average. Finally, we show that even datasets with coarse information along any or all dimensions provide little anonymity, and that women are more reidentifiable than men in credit card metadata.
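The unicity measurement behind these numbers can be sketched in a few lines of Python. The snippet below is our simplified illustration (the data format and function are hypothetical, and the real study works at far larger scale): each trial draws a few known (place, time) points from one person's records and checks whether those points match anyone else.

    import random

    def unicity(traces, p=4, trials=200, seed=0):
        """Estimate the fraction of people uniquely identified by p known points.
        traces maps person_id -> set of (place, time) tuples."""
        rng = random.Random(seed)
        ids = [pid for pid, pts in traces.items() if len(pts) >= p]
        unique = 0
        for _ in range(trials):
            pid = rng.choice(ids)
            known = set(rng.sample(sorted(traces[pid]), p))
            matches = [other for other, pts in traces.items() if known <= pts]
            unique += (matches == [pid])
        return unique / trials

    toy = {
        "a": {("shopA", 1), ("shopB", 2), ("shopC", 3), ("shopD", 4), ("shopE", 5)},
        "b": {("shopA", 1), ("shopB", 2), ("shopF", 3), ("shopG", 4), ("shopH", 5)},
        "c": {("shopI", 1), ("shopJ", 2), ("shopK", 3), ("shopL", 4), ("shopM", 5)},
    }
    print(unicity(toy, p=4, trials=50))   # 1.0 for this toy dataset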
In a world where sensors, data storage, and processing power are too cheap to meter,
how do you ensure that users can realize the full value of their data while protecting
their privacy? openPDS is a field-tested, personal metadata management framework
that allows individuals to collect, store, and give fine-grained access to their metadata
to third parties. SafeAnswers is a new and practical way of protecting the privacy of
metadata at an individual level. SafeAnswers turns a hard anonymization problem into
a more tractable security one. It allows services to ask questions whose answers are
calculated against the metadata, instead of trying to anonymize individuals' metadata.
Together, openPDS and SafeAnswers provide a new way of dynamically protecting
personal metadata.
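The sketch below illustrates the SafeAnswers idea in a few lines of Python; the class and method names are our own illustration, not the openPDS codebase. The raw metadata stays inside the personal data store, and a third party only ever receives the result of a question the user has approved.

    class PersonalDataStore:
        def __init__(self, records):
            self._records = records       # raw metadata never leaves this object
            self._approved = {}           # question name -> function over the records

        def approve(self, name, fn):
            self._approved[name] = fn

        def ask(self, name):
            if name not in self._approved:
                raise PermissionError("question not approved by the user")
            return self._approved[name](self._records)

    # A service learns how many evenings the user spent at home, without
    # ever seeing the underlying location log.
    pds = PersonalDataStore([{"hour": 23, "place": "home"},
                             {"hour": 9, "place": "office"},
                             {"hour": 22, "place": "home"}])
    pds.approve("evenings_home", lambda rs: sum(r["place"] == "home" and r["hour"] >= 20 for r in rs))
    print(pds.ask("evenings_home"))   # 2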
241. Sensible Organizations
Markets are notorious for bubbles and bursts. Other research has found that crowds of
lay-people can replace even leading experts to predict everything from product sales to
the next big diplomatic event. In this project, we leverage both threads of research to
see how prediction markets can be used to predict business and technological
innovations, and use them as a model to fix financial bubbles. For example, a prediction market rolled out inside Intel proved very successful, beating the official Intel forecast 75 percent of the time. Prediction markets have also yielded as much as a 25 percent reduction in mean squared error over the predictions of official experts at Google, Ford, and Koch Industries.
Data mining of email has provided important insights into how organizations function
and what management practices lead to greater productivity. But important
communications are almost always face-to-face, so we are missing the greater part of
the picture. Today, however, people carry cell phones and wear RFID badges. These
body-worn sensor networks mean that we can potentially know who talks to whom, and
even how they talk to each other. Sensible Organizations investigates how these new
technologies for sensing human interaction can be used to reinvent organizations and
management.
We used 15 months of data from 1.5 million people to show that four
points--approximate places and times--are enough to identify 95 percent of individuals
in a mobility database. Our work shows that human behavior puts fundamental natural
constraints on the privacy of individuals, and these constraints hold even when the
resolution of the dataset is low. These results demonstrate that even coarse datasets
provide little anonymity. We further developed a formula to estimate the uniqueness of
human mobility traces. These findings have important implications for the design of
frameworks and institutions dedicated to protecting the privacy of individuals.
Javier Hernandez Rivera, Weixuan 'Vincent' Chen, Akane Sano, and Rosalind W.
Picard
NEW LISTING
This project examines how the expression granted by new musical interfaces can be
harnessed to create positive changes in health and wellbeing. We are conducting
experiments to measure EEG dynamics and physical movements performed by
participants who are using software designed to invite physical and musical expression
of the basic emotions. The present demonstration of this system incorporates an
expressive gesture sonification system using a Leap Motion device, paired with an
ambient music engine controlled by EEG-based affective indices. Our intention is to
better understand affective engagement, by creating both a new musical interface to
invite it, and a method to measure and monitor it. We are exploring the use of this
device and protocol in therapeutic settings in which mood recognition and regulation
are a primary goal.
248. BioGlass: Physiological Parameter Estimation Using a Head-Mounted Wearable Device
Rosalind W. Picard, Javier Hernandez Rivera, James M. Rehg (Georgia Tech) and
Yin Li (Georgia Tech)
249. BioInsights: Extracting Personal Data from Wearable Motion Sensors
What if you could see what calms you down or increases your stress as you go through
your day? What if you could see clearly what is causing these changes for your child or
another loved one? People could become better at accurately interpreting and
communicating their feelings, and better at understanding the needs of those they love.
This work explores the possibility of using sensors embedded in Google Glass, a
head-mounted-wearable device, to robustly measure physiological signals of the
wearer.
Wearable devices are increasingly in long-term close contact with the body, giving them
the potential to capture sensitive, unexpected, and surprising personal data. For
instance, we have recently demonstrated that motion sensors embedded in a
head-mounted wearable device like Google Glass can capture the heart rate and
respiration rate from subtle motions of the head. We are examining additional
signatures of information that can be read from motion sensors in wearable devices: for
example, can a person's identity be validated from their subtle physiological motions,
especially those related to their cardiorespiratory information? How robust are these
motion-signatures to identifying a wearer, even when undergoing changes in posture,
stress, and activity?
Most wrist-wearable smart watches and fitness bands include motion sensors;
however, their use is limited to estimating physical activities such as tracking the
number of steps when walking or jogging. This project explores how we can process
subtle motion information from the wrist to measure cardiac and respiratory activity. In
particular we study the following research questions: How can we use the currently
available motion sensors within wrist-worn devices to accurately estimate heart rate
and breathing rate? How do the wrist-worn estimates compare to traditional sensors
and to state-of-the-art wearable physiological sensors? Does combining measurements
from motion and traditional methods improve performance? How well do the proposed
methods perform in daily life situations to provide unobtrusive physiological
assessments?
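One plausible pipeline for the first of these questions is sketched below in Python; it is our assumed approach (band-pass filtering the accelerometer magnitude and picking the dominant spectral peak), not the project's published algorithm, and the sampling rate and cut-off frequencies are illustrative.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def estimate_heart_rate(acc, fs):
        """acc: (N, 3) wrist accelerometer samples in g; fs: sampling rate in Hz."""
        magnitude = np.linalg.norm(acc, axis=1)
        magnitude -= magnitude.mean()
        # Cardiac micro-motions are tiny; keep only 0.7-3.0 Hz (42-180 bpm).
        b, a = butter(3, [0.7, 3.0], btype="band", fs=fs)
        cardiac = filtfilt(b, a, magnitude)
        spectrum = np.abs(np.fft.rfft(cardiac))
        freqs = np.fft.rfftfreq(len(cardiac), d=1.0 / fs)
        band = (freqs >= 0.7) & (freqs <= 3.0)
        return 60.0 * freqs[band][np.argmax(spectrum[band])]   # beats per minute

    # Synthetic check: a 1.2 Hz (72 bpm) pulse riding on gravity, plus sensor noise.
    rng = np.random.default_rng(0)
    fs = 50.0
    t = np.arange(0, 60, 1.0 / fs)
    pulse = 0.002 * np.sin(2 * np.pi * 1.2 * t)
    acc = np.c_[np.zeros_like(t), np.zeros_like(t), 1.0 + pulse] + 0.001 * rng.standard_normal((len(t), 3))
    print(round(estimate_heart_rate(acc, fs), 1))   # approximately 72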
Working with the LEGO Group and Hasbro, we looked at the emotional experience of
playing with games and LEGO bricks. We measured participants' skin conductance as
they learned to play with these new toys. By marking the stressful moments we were
able to see what moments in learning should be redesigned. Our findings suggest that
framing is key: how can we help children recognize their achievements? We also saw
how children are excited to take on new responsibilities but are then quickly
discouraged when they aren't given the resources to succeed. Our hope for this work is
that by using skin conductance sensors, we can help companies better understand the
unique perspective of children and build experiences fit for them.
We explore advanced machine learning and reflective user interfaces to scale the
national Crisis Text Line. We are using state-of-the-art probabilistic graphical topic
models and visualizations to help a mental health counselor extract patterns of mental
health issues experienced by participants, and bring large-scale data science to
understanding the distribution of mental health issues in the United States.
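As a toy illustration of the underlying technique (topic modeling), the Python sketch below fits a two-topic LDA model with scikit-learn on four made-up snippets; it is not the project's model, data, or feature pipeline.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    texts = ["cannot sleep and feel anxious every night",
             "another fight with my parents about school",
             "anxious about exams and getting no sleep",
             "my parents never listen to me about school"]

    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

    terms = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-3:][::-1]]
        print("topic", k, ":", top)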
Yadid Ayzenberg
Complex and expensive medical devices are mainly used in medical facilities by health
professionals. IDA is an attempt to disrupt this paradigm and introduce a new type of
device: easy to use, low cost, and open source. It is a digital stethoscope that can be
connected to the Internet for streaming physiological data to remote clinicians.
Designed to be fabricated anywhere in the world with minimal equipment, it can be
operated by individuals without medical training.
Weixuan 'Vincent' Chen, Javier Hernandez Rivera, Akane Sano and Rosalind W.
Picard
This study aims to bring objective measurement to the multiple "pulse" and "pulse-like"
measures made by practitioners of Traditional Chinese Medicine (TCM). The measures
are traditionally made by manually palpating the patient's inner wrist in multiple places,
and relating the sensed responses to various medical conditions. Our project brings
several new kinds of objective measurement to this practice, compares their efficacy,
and examines the connection of the measured data to various other measures of health
and stress. Our approach includes the possibility of building a smartwatch application
that can analyze stress and health information from the point of view of TCM.
258. Lensing: Cardiolinguistics for Atypical Angina
Receiving a shot or discussing health problems can be stressful, but does not always
have to be. We measure participants' skin conductance as they use medical devices or
visit hospitals and note times when stress occurs. We then prototype possible solutions
and record how the emotional experience changes. We hope work like this will help
bring the medical community closer to their customers.
262. Mobisensus: Predicting Your Stress/Mood from Mobile Sensor Data
263. Modulating Peripheral and Cortical Arousal Using a Musical Motor Response Task
NEW LISTING
Can we recognize stress, mood, and health conditions from wearable sensors and
mobile-phone usage data? We analyze long-term, multi-modal physiological,
behavioral, and social data (electrodermal activity, skin temperature, accelerometer,
phone usage, social network patterns) in daily lives with wearable sensors and mobile
phones to extract bio-markers related to health conditions, interpret inter-individual
differences, and develop systems to keep people healthy.
We are conducting EEG studies to identify the musical features and musical interaction
patterns that universally impact measures of arousal. We hypothesize that we can
induce states of high and low arousal using electrodermal activity (EDA) biofeedback,
and that these states will produce correlated differences in concurrently recorded skin
conductance and EEG data, establishing a connection between peripherally recorded
physiological arousal and cortical arousal as revealed in EEG. We also hypothesize
that manipulation of musical features of a computer-generated musical stimulus track
will produce changes in peripheral and cortical arousal. These musical stimuli and
programmed interactions may be incorporated into music technology therapy, designed
to reduce arousal or increase learning capability by increasing attention. We aim to
provide a framework for the neural basis of emotion-cognition integration of learning
that may shed light on education and possible applications to improve learning by
emotion regulation.
265. Panoply
Current methods to assess depression and then ultimately select appropriate treatment
have many limitations. They are usually based on clinician-rated scales that were developed in the 1960s. Their main drawbacks are a lack of objectivity, being symptom-based rather than preventative, and requiring accurate communication. This work
explores new technology to assess depression, including its increase or decrease, in an
automatic, more objective, pre-symptomatic, and cost-effective way using wearable
sensors and smart phones for 24/7 monitoring of different personal parameters such as
physiological data, voice characteristics, sleep, and social interaction. We aim to enable
early diagnosis of depression, prevention of depression, assessment of depression for
people who cannot communicate, better assignment of a treatment, early detection of
treatment remission and response, and anticipation of post-treatment relapse or
recovery.
Panoply is a crowdsourcing application for mental health and emotional wellbeing. The
platform offers a novel approach to computer-based psychotherapy, one that is
optimized for accessibility, engagement, and therapeutic efficacy. A three-week
randomized-controlled trial with 166 participants compared Panoply to an active control
task (online expressive writing). Panoply conferred greater or equal benefits for nearly
every therapeutic outcome measure. Panoply also significantly outperformed the
control task on all measures of engagement.
266. PongCam
268. Real-Time Assessment of Suicidal Thoughts and Behaviors
269. SmileTracker
Depression, often coupled with anxiety, is one of the key factors leading to suicidal behavior, which is among the leading causes of death worldwide. Despite the scope and
seriousness of suicidal thoughts and behaviors, we know surprisingly little about what
suicidal thoughts look like in nature (e.g., How frequent, intense, and persistent are they
among those who have them? What cognitive, affective/physiological, behavioral, and
social factors trigger their occurrence?). The reason for this lack of information is that
historically researchers have used retrospective self-report to measure suicidal
thoughts, and have lacked the tools to measure them as they naturally occur. In this
work we explore use of wearable devices and smartphones to identify behavioral,
affective, and physiological predictors of suicidal thoughts and behaviors.
Akane Sano, Amy Yu, Sara Taylor, Cesar Hidalgo and Rosalind Picard
The SNAPSHOT study seeks to measure Sleep, Networks, Affect, Performance,
Stress, and Health using Objective Techniques. It is an NIH-funded collaborative research project between the Affective Computing group, the Macro Connections group, and Harvard Medical School's Brigham and Women's Hospital. We have been running this study since fall 2013, collecting one month of data each semester from 50 socially connected MIT undergraduate students. We have collected data from about 170
participants, totaling over 5,000 days of data. We measure physiological, behavioral,
environmental, and social data using mobile phones, wearable sensors, surveys, and
lab studies. We investigate how daily behaviors and social connectivity influence sleep
behaviors, health, and outcomes such as mood, stress, and academic performance.
Using this multimodal data, we are developing models to predict onsets of sadness and
stress. This study will provide insights into behavioral choices for wellbeing and
performance.
272. StoryScape
274. Tributary
277. Wavelet-Based Motion Artifact Removal for Electrodermal Activity
Weixuan 'Vincent' Chen, Natasha Jaques, Sara Taylor, Akane Sano, Szymon
Fedor and Rosalind W. Picard
NEW LISTING
Electrodermal activity (EDA) recording is a powerful, widely used tool for monitoring
psychological or physiological arousal. However, analysis of EDA is hampered by its
sensitivity to motion artifacts. We propose a method for removing motion artifacts from
EDA, measured as skin conductance (SC), using a stationary wavelet transform (SWT).
We modeled the wavelet coefficients as a Gaussian mixture distribution corresponding
to the underlying skin conductance level (SCL) and skin conductance responses
(SCRs). The goodness-of-fit of the model was validated on ambulatory SC data. We
evaluated the proposed method in comparison with three previous approaches. Our
method achieved a greater reduction of artifacts while retaining motion-artifact-free
data.
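The Python sketch below shows the general flavor of wavelet-domain artifact suppression using the PyWavelets package; for simplicity it zeroes out abnormally large SWT detail coefficients using a robust threshold, rather than fitting the Gaussian mixture model described above, so it illustrates the transform rather than reimplementing the method.

    import numpy as np
    import pywt

    def suppress_artifacts(sc, wavelet="haar", level=3):
        """sc: 1-D skin conductance signal; length must be a multiple of 2**level."""
        coeffs = pywt.swt(sc, wavelet, level=level)      # stationary wavelet transform
        cleaned = []
        for approx, detail in coeffs:
            scale = np.median(np.abs(detail)) / 0.6745 + 1e-12   # robust noise scale
            detail = np.where(np.abs(detail) > 4.0 * scale, 0.0, detail)  # drop outliers
            cleaned.append((approx, detail))
        return pywt.iswt(cleaned, wavelet)

    # Toy example: a slow skin conductance drift plus a sharp motion spike.
    t = np.linspace(0, 8, 512)
    sc = 2.0 + 0.1 * t
    sc[256:260] += 5.0                                   # simulated motion artifact
    cleaned = suppress_artifacts(sc)
    print(np.abs(sc - (2.0 + 0.1 * t)).max())            # 5.0 before
    print(np.abs(cleaned - (2.0 + 0.1 * t)).max())       # smaller after: artifact attenuated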
278. Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?
NEW LISTING
279. Crowdsourcing a Manhunt
NEW LISTING
280. Crowdsourcing Under Attack
NEW LISTING
Iyad Rahwan, Lorenzo Coviello, Morgan Frank, Lijun Sun, Manuel Cebrian and
NICTA
The Honest Crowds project addresses shortcomings of traditional survey techniques in
the modern information and big data age. Web survey platforms, such as Amazon's
Mechanical Turk and CrowdFlower, bring together millions of surveys and millions of survey participants; however, paying a flat rate for each completed survey may lead to responses that lack care and forethought. Rather than allowing
survey takers to maximize their reward by completing as many surveys as possible, we
demonstrate how strategic incentives can be used to actually reward information and
honesty rather than just participation. The incentive structures that we propose provide
scalable solutions for the new paradigm of survey and active data collection.
282. 6D Display
283. A Switchable Light-Field Camera
284. Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance
Ramesh Raskar, Ankit Mohan, Grace Woo, Shinsaku Hiura and Quinn Smithwick
With over a billion people carrying camera-phones worldwide, we have a new
opportunity to upgrade the classic bar code to encourage a flexible interface between
the machine world and the human world. Current bar codes must be read within a short
range and the codes occupy valuable space on products. We present a new, low-cost,
passive optical design so that bar codes can be shrunk to less than 3 mm and can be
read by unmodified ordinary cameras several meters away.
Ramesh Raskar, Vitor Pamplona, Erick Passos, Jan Zizka, Jason Boggess, David
Schafran, Manuel M. Oliveira, Everett Lawson, and Estebam Clua
288. Compressive Light-Field Camera: Next Generation in 3D Photography
289. Eyeglasses-Free Displays
Millions of people worldwide need glasses or contact lenses to see or read properly.
We introduce a computational display technology that predistorts the presented content
for an observer, so that the target image is perceived without the need for eyewear. We
demonstrate a low-cost prototype that can correct myopia, hyperopia, astigmatism, and
even higher-order aberrations that are difficult to correct with glasses.
The use of fluorescent probes and the recovery of their lifetimes allow for significant
advances in many imaging systems, in particular medical imaging systems. Here, we
propose and experimentally demonstrate reconstructing the locations and lifetimes of
fluorescent markers hidden behind a turbid layer. This opens the door to various
applications for non-invasive diagnosis, analysis, flowmetry, and inspection. The
method is based on a time-resolved measurement which captures information about
both fluorescence lifetime and spatial position of the probes. To reconstruct the scene,
the method relies on a sparse optimization framework to invert time-resolved
measurements. This wide-angle technique does not rely on coherence, and does not
require the probes to be directly in line of sight of the camera, making it potentially
suitable for long-range imaging.
With networked cameras in everyone's pockets, we are exploring the practical and
creative possibilities of public imaging. LensChat allows cameras to communicate with
each other using trusted optical communications, allowing users to share photos with a
friend by taking pictures of each other, or borrow the perspective and abilities of many
cameras.
Using a femtosecond laser and a camera with a time resolution of about one trillion
frames per second, we recover objects hidden out of sight. We measure speed-of-light
timing information of light scattered by the hidden objects via diffuse surfaces in the
scene. The object data are mixed up and are difficult to decode using traditional
cameras. We combine this "time-resolved" information with novel reconstruction
algorithms to untangle image information and demonstrate the ability to look around
corners.
Alumni Contributors: Andreas Velten, Otkrist Gupta and Di Wu
Vitor Pamplona, Manuel Oliveira, Erick Passos, Ankit Mohan, David Schafran,
Jason Boggess and Ramesh Raskar
Can a person look at a portable display, click on a few buttons, and recover his
refractive condition? Our optometry solution combines inexpensive optical elements
and interactive software components to create a new optometry device suitable for
developing countries. The technology allows for early, extremely low-cost, mobile, fast,
and automated diagnosis of the most common refractive eye disorders: myopia
(nearsightedness), hypermetropia (farsightedness), astigmatism, and presbyopia
(age-related visual impairment). The patient overlaps lines in up to eight meridians and
the Android app computes the prescription. The average accuracy is comparable to the
prior art and in some cases, even better. We propose the use of our technology as a
self-evaluation tool for use in homes, schools, and at health centers in developing
countries, and in places where an optometrist is not available or is too expensive.
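The per-meridian measurements map to a standard sphero-cylindrical prescription via the model P(theta) = S + C*sin^2(theta - axis). The Python sketch below (our illustration, not the app's code) recovers sphere, cylinder, and axis from such measurements with a linear least-squares fit.

    import numpy as np

    def fit_prescription(meridians_deg, powers_diopters):
        th = np.radians(meridians_deg)
        # Linearize: P = a0 + a1*cos(2*theta) + a2*sin(2*theta)
        A = np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
        a0, a1, a2 = np.linalg.lstsq(A, powers_diopters, rcond=None)[0]
        half_cyl = np.hypot(a1, a2)
        cyl = 2.0 * half_cyl
        sph = a0 - half_cyl
        axis = np.degrees(0.5 * np.arctan2(-a2, -a1)) % 180.0
        return sph, cyl, axis

    # Synthetic check: sphere -2.00 D, cylinder +1.50 D at axis 30 degrees,
    # sampled at eight meridians.
    ms = np.arange(0.0, 180.0, 22.5)
    ps = -2.0 + 1.5 * np.sin(np.radians(ms - 30.0)) ** 2
    print(fit_prescription(ms, ps))   # approximately (-2.0, 1.5, 30.0)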
298. PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures
We introduce polarization field displays as an optically efficient design for dynamic light
field display using multi-layered LCDs. Such displays consist of a stacked set of liquid
crystal panels with a single pair of crossed linear polarizers. Each layer is modeled as a
spatially controllable polarization rotator, as opposed to a conventional spatial light
modulator that directly attenuates light. We demonstrate that such displays can be
controlled, at interactive refresh rates, by adopting the SART algorithm to
tomographically solve for the optimal spatially varying polarization state rotations
applied by each layer. We validate our design by constructing a prototype using
modified off-the-shelf panels. We demonstrate interactive display using a GPU-based
SART implementation supporting both polarization-based and attenuation-based
architectures.
Everett Lawson, Jason Boggess, Alex Olwal, Gordon Wetzstein, and Siddharth
Khullar
The major challenge in preventing blindness is identifying patients and bringing them to
specialty care. Diseases that affect the retina, the image sensor in the human eye, are
particularly challenging to address, because they require highly trained eye specialists
(ophthalmologists) who use expensive equipment to visualize the inner parts of the eye.
Diabetic retinopathy, HIV/AIDS-related retinitis, and age-related macular degeneration
are three conditions that can be screened and diagnosed to prevent blindness caused
by damage to the retina. We exploit a combination of two novel ideas to simplify the constraints of traditional devices, using simplified optics and clever illumination to capture and visualize images of the retina in a standalone device easily operated by the
the user. Prototypes are conveniently embedded in either a mobile hand-held retinal
camera, or wearable eyeglasses.
301. Reflectance Acquisition Using Ultrafast Imaging
Jaewon Kim
We present a new single-shot, shadow-based method for scanning 3D objects. We decouple 3D occluders from 4D illumination using shield fields: the 4D
attenuation function which acts on any light field incident on an occluder. We then
analyze occluder reconstruction from cast shadows, leading to a single-shot light field
camera for visual hull reconstruction.
305. Single-Photon Sensitive Ultrafast Imaging
Within the last few years, cellphone subscriptions have spread widely and now cover
even the remotest parts of the planet. Adequate access to healthcare, however, is not
widely available, especially in developing countries. We propose a new approach to
converting cellphones into low-cost scientific devices for microscopy. Cellphone
microscopes have the potential to revolutionize health-related screening and analysis
for a variety of applications, including blood and water tests. Our optical system is more
flexible than previously proposed mobile microscopes, and allows for wide field-of-view
panoramic imaging, the acquisition of parallax, and coded background illumination,
which optically enhances the contrast of transparent and refractive specimens.
The ability to record images with extreme temporal resolution enables a diverse range
of applications, such as time-of-flight depth imaging and characterization of ultrafast
processes. Here we present a demonstration of the potential of single-photon detector
arrays for visualization and rapid characterization of events evolving on picosecond
time scales. The single-photon sensitivity, temporal resolution, and full-field imaging
capability enable the observation of light-in-flight in air, as well as the measurement of
laser-induced plasma formation and dynamics in its natural environment. The extreme
sensitivity and short acquisition times pave the way for real-time imaging of ultrafast
processes or visualization and tracking of objects hidden from view.
Skin and tissue perfusion measurements are important parameters for diagnosis of
wounds and burns, and for monitoring plastic and reconstructive surgeries. In this
project, we use a standard camera and a laser source in order to image blood-flow
speed in skin tissue. We show results of blood-flow maps of hands, arms, and fingers.
We combine the complex scattering of laser light from blood with computational
techniques found in computer science.
Alumni Contributor: Christopher Barsi
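One standard computation in this space is laser speckle contrast imaging, where local contrast K = sigma/mu drops wherever moving blood blurs the speckle pattern during the exposure. The Python sketch below is a hedged illustration of that general technique, not necessarily the exact pipeline used in this project.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(image, window=7):
        """Per-pixel speckle contrast (std/mean) of a raw laser speckle image."""
        img = image.astype(float)
        mean = uniform_filter(img, window)
        mean_sq = uniform_filter(img * img, window)
        var = np.clip(mean_sq - mean * mean, 0.0, None)
        return np.sqrt(var) / (mean + 1e-9)

    # Toy example: fully developed speckle everywhere, with a blurred patch in
    # the middle standing in for a region of fast flow.
    rng = np.random.default_rng(0)
    raw = rng.exponential(1.0, size=(128, 128))
    raw[48:80, 48:80] = uniform_filter(raw, 5)[48:80, 48:80]
    K = speckle_contrast(raw)
    print(K[8:40, 8:40].mean(), K[56:72, 56:72].mean())   # contrast is lower in the "flow" region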
Daniel Saakes, Kevin Chiu, Tyler Hutchison, Biyeun Buczyk, Naoya Koizumi and
Masahiko Inami
How can we show our 16-megapixel photos from our latest trip on a digital display?
How can we create screens that are visible in direct sunlight as well as complete
darkness? How can we create large displays that consume less than 2W of power?
How can we create design tools for digital decal application and intuitive computer-aided modeling? We introduce a display that is high resolution but updates at a low
frame rate, a slow display. We use lasers and monostable light-reactive materials to
provide programmable space-time resolution. This refreshable, high resolution display
exploits the time decay of monostable materials, making it attractive in terms of cost
and power requirements. Our effort to repurpose these materials involves solving
underlying problems in color reproduction, day-night visibility, and optimal time
sequences for updating content.
308. SpeckleSense
309. SpecTrans: Classification of Transparent Materials and Interactions
Munehiko Sato, Alex Olwal, Boxin Shi, Shigeo Yoshida, Atsushi Hiyama,
Michitaka Hirose and Tomohiro Tanikawa, Ramesh Raskar
310. StreetScore
313. Time-of-Flight Microwave Camera
NEW LISTING
This work focuses on bringing powerful concepts from wave optics to the creation of
new algorithms and applications for computer vision and graphics. Specifically,
ray-based, 4D lightfield representation, based on simple 3D geometric principles, has
led to a range of new applications that include digital refocusing, depth estimation,
synthetic aperture, and glare reduction within a camera or using an array of cameras.
The lightfield representation, however, is inadequate to describe interactions with
diffractive or phase-sensitive optical elements. Therefore we use Fourier optics
principles to represent wavefronts with additional phase information. We introduce a
key modification to the ray-based model to support modeling of wave phenomena. The
two key ideas are "negative radiance" and a "virtual light projector." This involves
exploiting higher dimensional representation of light transport.
Our architecture takes a hybrid approach to microwaves and treats them like waves of
light. Most other work places antennas in a 2D arrangement to directly sample the RF
reflections that return. Instead of placing antennas in a 2D arrangement, we use a single,
passive, parabolic reflector (dish) as a lens. You can think of every point on that dish as
an antenna with a fixed phase-offset. This means that the lens acts as a fixed set of 2D
antennas which are very dense and spaced across a large aperture. We then sample
the focal-plane of that lens. This architecture makes it possible for us to capture higher
resolution images at a lower cost.
Guy Satat, Barmak Heshmat, Dan Raviv and Ramesh Raskar
We present a new method to detect and distinguish between different types of fluorescent materials. The suggested technique provides a dramatically larger depth range than previous methods, enabling medical diagnosis of body tissues without removing the tissue from the body, which is the current medical standard. The method uses fluorescent probes, which are commonly used in medical diagnosis, and characterizes them by their fluorescence lifetime, that is, the average time the fluorescence emission lasts. Because it can distinguish between different fluorescence lifetimes, the method allows diagnosis of deep tissues: locating fluorescent probes in the body can, for example, indicate the location of a tumor in deep tissue and classify it as malignant or benign according to its fluorescence lifetime, eliminating the need for X-ray imaging or biopsy.
Alumni Contributor: Christopher Barsi
Andreas Velten, Di Wu, Adrin Jarabo, Belen Masia, Christopher Barsi, Chinmaya
Joshi, Everett Lawson, Moungi Bawendi, Diego Gutierrez, and Ramesh Raskar
We have developed a camera system that captures movies at an effective rate of
approximately one trillion frames per second. In one frame of our movie, light moves
only about 0.6 mm. We can observe pulses of light as they propagate through a scene.
We use this information to understand how light propagation affects image formation
and to learn things about a scene that are invisible to a regular camera.
316. Ultrasound Tomography
Ramesh Raskar, Boxin Shi, Hang Zhao, Christy Fernandez-Cull and Sai-Kit Yeung
NEW LISTING
Traditional medical ultrasound assumes that we are imaging ideal liquids. We are
interested in imaging muscle and bone, as well as measuring elastic properties of tissues, all cases where this assumption fails badly. Motivated by cancer detection, Duchenne muscular dystrophy, and prosthetic fitting, we use tomographic techniques as well as ideas from seismic imaging to address these issues.
319. VisionBlocks
Hyowon Lee, Nikhil Naik, Lubos Omelina, Daniel Tokunaga, Tiago Lucena and
Ramesh Raskar
We are creating a novel visual lifelogging framework for applications in personal life and
workplaces.
Hal Abelson, Eric Klopfer, Mitchel Resnick, Andrew McKinney, CSAIL and
Scheller Teacher Education Program
App Inventor is an open-source tool that democratizes app creation. By combining
LEGO-like blocks onscreen, even users with no prior programming experience can use
App Inventor to create their own mobile applications. Currently, App Inventor has over
2,000,000 users and is being taught by universities, schools, and community centers
worldwide. In those initiatives, students not only acquire important technology skills
such as computer programming, but also have the opportunity to apply computational
thinking concepts to many fields including science, health, education, business, social
action, entertainment, and the arts. Work on App Inventor was initiated in Google
Research by Hal Abelson and is continuing at the MIT Media Lab as part of its Center
for Mobile Learning, a collaboration with the MIT Computer Science and Artificial
Intelligence Laboratory (CSAIL) and the Scheller Teacher Education Program (STEP).
322. Askii
centers, and families in learning practices that are more hands-on and centered on
students' interests and ideas. For additional information, please contact
[email protected].
331. Libranet
Improving adult learning, especially for adults who are unemployed or unable to
financially support their families, is a challenge that affects the future wellbeing of
millions of individuals in the US. We are working with the Joyce Foundation, employers,
learning researchers, and the Media Lab community to prototype three to five new
models for adult learning that involve technology innovation and behavioral insights.
Philipp Schmidt, Juliana Nazare, Katherine McConachie, Srishti Sethi, and Guy
Zyskind
The Media Lab will award certificates to members of our community who are outside of the academic program. A project from the Learning Learning initiative, the certificates are registered on the blockchain, cryptographically signed, and tamper-proof. These
certificates can be designed to represent different contributions or recognition. What
they stand for is included in the certificate. Through these objects we will critically
explore notions of social capital and reputation, empathy and gift economies, and social
behavior. We are also developing a blueprint/model for other organizations to start
doing the same. The code is open source so that others can experiment with the idea of
digital certificates. Those certificates would have no connection to the Media Lab.
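A stripped-down illustration of the tamper-evidence idea is sketched below in Python; the field names are invented, and the signing and blockchain anchoring used by the real project are omitted. The certificate is reduced to a canonical hash, the hash is what gets registered publicly, and any later edit to the document breaks the match.

    import hashlib, json

    def certificate_digest(cert):
        canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode()).hexdigest()

    cert = {"recipient": "alice@example.org",
            "issuer": "MIT Media Lab",
            "statement": "Contributed to the Learning Creative Learning course",
            "issued": "2015-10-01"}

    anchored = certificate_digest(cert)          # this value is what would be registered
    print(certificate_digest(cert) == anchored)  # True: certificate unchanged
    cert["statement"] = "Awarded a degree"
    print(certificate_digest(cert) == anchored)  # False: tampering is detectable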
Media Lab Virtual Visit is intended to open up the doors of the Media Lab to people
from all around the world. The visit is hosted on the Unhangout platform, a new way of
running large-scale unconferences on the web that was developed at the Media Lab. It
is an opportunity for students or potential collaborators to talk with current researchers
at the Lab, learn about their work, and share ideas.
Learning for everyone, by everyone. The Open Learning project builds online learning
communities that work like the web: peer-to-peer, loosely joined, open. It works with
Media Lab faculty and students to open up the magic of the Lab through online
learning. Our first experiment was Learning Creative Learning, a course taught at the
Media Lab, which attracted 24,000 participants. We are currently developing ideas for
massive citizen science projects, engineering competitions for kids, and new physical
infrastructures for learning that reclaim the library.
337. Para
Jennifer Jacobs, Mitchel Resnick, Joel Brandt, Sumit Gogia, and Radomir Mech
Procedural representations, enabled through programming, are a powerful tool for
digital illustration, but writing code conflicts with the intuitiveness and immediacy of
direct manipulation. Para is a digital illustration tool that uses direct manipulation to
define and edit procedural artwork. Through creating and altering vector paths, artists
can define iterative distributions, parametric constraints, and conditional behaviors.
Para makes it easier for people to create generative artwork, and creates an intuitive
workflow between manual and procedural drawing methods.
339. Scratch
Saskia Leggett, Lisa O'Brien, Kasia Chmielinski, Carl Bowman, and Mitchel
Resnick
Scratch Day (day.scratch.mit.edu) is a network of face-to-face local gatherings, on the
same day in all parts of the world, where people can meet, share, and learn more about
Scratch, a programming environment that enables people to create their own interactive
stories, games, animations, and simulations. We believe that these types of
face-to-face interactions remain essential for ensuring the accessibility and
sustainability of initiatives such as Scratch. In-person interactions enable richer forms of
communication among individuals, more rapid iteration of ideas, and a deeper sense of
belonging and participation in a community. The first Scratch Day took place in 2009. In
2014, there were 260 events in 56 countries.
Alumni Contributor: Karen Brennan
343. ScratchJr
Mitchel Resnick, Marina Bers, Chris Garrity, Tim Mickel, Paula Bonta, and Brian
Silverman
ScratchJr makes coding accessible to younger children (ages 5-7), enabling them to
program their own interactive stories, games, and animations. To make ScratchJr
developmentally appropriate for younger children, we revised the interface and
provided new structures to help young children learn relevant math concepts and
problem-solving strategies. ScratchJr is available as a free app for iPads and Android.
ScratchJr is a collaboration between the MIT Media Lab, Tufts University, and Playful
Invention Company.
Alumni Contributors: Sayamindu Dasgupta and Champika Fernando
344. Spin
Alisha Panjwani, Natalie Rusk, Jie Qi, Chris Garrity, Tiffany Tseng, Jennifer
Jacobs, Mitchel Resnick
The Lifelong Kindergarten group is collaborating with the Museum of Science in Boston
to develop materials and workshops that engage young people in "maker" activities in
Computer Clubhouses around the world, with support from Intel. The activities
introduce youth to the basics of circuitry, coding, crafting, and engineering. In addition,
graduate students are testing new maker technologies and workshops for Clubhouse
staff and youth. The goal of the initiative is to help young people from under-served
communities gain experience and confidence in their ability to design, create, and
invent with new technologies.
Alumni Contributors: David A. Mellis and Ricarose Roque
346. Unhangout
Philipp Schmidt, Drew Harry, Charlie DeTar, Srishti Sethi, and Katherine
McConachie
Unhangout is an open-source platform for running large-scale unconferences online.
We use Google Hangouts to create as many small sessions as needed, and help users
find others with shared interests. Think of it as a classroom with an infinite number of
breakout sessions. Each event has a landing page, which we call the lobby. When
participants arrive, they can see who else is there and chat with each other. The hosts
can do a video welcome and introduction that gets streamed into the lobby. Participants
then break out into smaller sessions (up to 10 people per session) for in-depth
conversations, peer-to-peer learning, and collaboration on projects. Unhangouts are
community-based learning instead of top-down information transfer.
350. Responsive Communities: Pilot Project in Jun, Spain
To gain insights into how digital technologies can make local governments more
responsive and deepen citizen engagement, we are studying the Spanish town of Jun
(population 3,500). For the last four years, Jun has been using Twitter as its principal
medium for citizen-government communication. We are mapping the resulting social
networks and analyzing the dynamics of the Twitter interactions, in order to better
understand the initiative's impact on the town. Our long-term goal is to determine
whether the system can be replicated at scale in larger communities, perhaps even
major cities.
While there are a number of literacy technology solutions developed for individuals, the
role of "social machines"--or social literacy learning--is less explored. We believe that
literacy is an inherently social activity that is best learned within a supportive community
network including peers, teachers and parents. By creating technology that is
child-driven, machine-guided, we hope to empower human learning networks in order
to establish a nutrient medium for literacy growth while enhancing personal, creative,
and autonomous interactions within communities. We are planning to pilot and deploy
our solutions in two environments: (1) Boston: We are planning to create a cross-age
peer mentorship program to engage students from different communities in socially
collaborative, self-expressive literacy learning opportunities via mobile devices. (2)
India: We are planning to use mobile devices to create hyperlocal learning networks in
rural areas where the need for literacy is particularly acute.
Alumni Contributor: Prashanth Vijayaraghavan
Deb Roy, Russell Stevens, Soroush Vosoughi, William Powers, Sophie Chou,
Perng-Hwa Kung, Neo (Mostafa) Mohsenvand, Raphael Schaad and Prashanth
Vijayaraghavan
The Electome project is a comprehensive mapping of the content and network
connections among the campaign's three core public sphere voices: candidates (and
their advocates), media (journalists and other mainstream media sources), and the
public. This mapping is used to trace the election's narratives as they form, spread,
morph, and decline among these three groups--identifying who and what influences
these dynamics. We are also developing metrics that measure narrative alignment and
misalignment among the groups, sub-groups (political party, media channel, advocacy
group, etc.), and specific individuals/organizations (officials, outlets, journalists,
influencers, sources, etc.). The Electome can be used to promote more responsive
elections by deploying analyses, metrics, and data samples that improve the exchange
of ideas among candidates, the media, and/or the public in the public sphere of an
election.
354. Activ8
355. Amphibian: Terrestrial Scuba Diving Using Virtual Reality
NEW LISTING
Harpreet Sareen, Jingru Guo, Misha Sra, Chris Schmandt, Dhruv Jain, Raymond
Wu and Rodrigo Victor De Melo Marques
Oceans are home to more biodiversity than anywhere else on this planet. Scuba diving
as a sport has helped us explore beautiful corals, striking fishes, and magnificent
wrecks. However, ocean diving is mentally and physically challenging. Prior research efforts have virtually simulated maritime scenes in comfortable environments, but most of these simulations are only projections on a computer screen or enclosed walls, so a realistic feeling of diving is never achieved. The few systems that are more realistic require people to swim in a pool or tank, which is cumbersome and tedious. Amphibian is a virtual reality system that allows the user to
experience the ocean environment in a convenient terrestrial setting.
357. Meta-Physical-Space VR
NEW LISTING
358. MugShots
For young children with hearing loss, learning is difficult and their social development is
often at risk. This physical impairment causes changes in their social behavior, and they may be deemed "socially awkward" over a task as simple as setting down an object, because it is hard for them to hear whether the object was set down politely or in a way that is disruptive to others. Kaan is a wearable wristband
to signal and alert a person with hearing loss if the sound emitted from setting down an
object is disruptive. The long-term goal is to educate children on acceptable ranges of
motions of objects in their everyday lives. The current device consists of a wearable
wristband that vibrates and emits light if the loudness of the sound around it goes
beyond a certain threshold.
Experience new dimensions and worlds without limits with friends. Encounter a new
physical connection within the virtual world. Explore virtual spaces by physically
exploring the real world. Interact with virtual objects by physically interacting with
real-world objects. Initial physical sensations include touching objects, structures, and
people while we work on adding sensations for feeling pressure, temperature, moisture,
smell, and other sensory experiences.
Cindy Hsin-Liu Kao and Chris Schmandt
MugShots enables visual communication through everyday objects. We embed a small
display into a coffee mug, an object with frequent daily use. Targeted for the workplace,
the mug transitions between different communication modes in public and private
spaces. In the private office space, the mug is an object for intimate communication
between remote friends; users receive emoticon stickers via the display. When brought
to a public area, the mug switches to a pre-selected image of the user's choice, serving
as a social catalyst to trigger conversations in public spaces.
359. NailO
360. OnTheGo
361. Spotz
Kevin Slavin, Julie Legault, Taylor Levy, Che-Wei Wang, Dalai Lama Center for
Ethics and Transformative Values and Tinsley Galyean
20 Day Stranger is a mobile app that creates an intimate and anonymous connection
between you and another person. For 20 days, you get continuous updates about
where they are, what they are doing, and eventually even how they are feeling, and they get the same updates about you. But you will never know who this person is. Does this change the way you think about other people you see throughout your day, any one of whom could be your stranger?
367. AutomaTiles
NEW LISTING
The crystal oscillator inside a quartz wristwatch vibrates 32,768 times per second.
This is too fast for a human to perceive, and it's even more difficult to imagine its
interaction with the mechanical circulation of a clock. 32,768 Times Per Second is a
diagrammatic, procedural, and fully functional sculpture of the electro-mechanical
landscape inside a common wristwatch. Through a series of electronic transformations,
the signal from a crystal is broken down over and over, and then built back up to the
human sense of time.
A tabletop set of cellular automata ready to exhibit complex systems through simple
behaviors, AutomaTiles explores emergent behavior through tangible objects.
Individually they live as simple organisms, imbued with a simple personality; together
they exhibit something "other" than the sum of their parts. Through communication with
their neighbors, complex interactions arise. What will you discover with AutomaTiles?
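A software analogue of this kind of neighbor-driven rule is sketched below in Python (an illustration of cellular automata in general, not the rules programmed into the actual tiles): each tile in a ring looks only at its two neighbors, and a single lit tile unfolds into a larger pattern.

    def step(states):
        """Each tile's next state is the XOR of its two ring neighbors."""
        n = len(states)
        return [states[(i - 1) % n] ^ states[(i + 1) % n] for i in range(n)]

    tiles = [0, 0, 0, 0, 1, 0, 0, 0]      # one lit tile in a ring of eight
    for _ in range(6):
        print(tiles)
        tiles = step(tiles)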
Kevin Slavin and Taylor Levy
Sculptural artifacts that model and reveal the embedded history of human thought and
scientific principles hidden inside banal digital technologies. These artifacts provide
alternative ways to engage and understand the deepest interior of our everyday
devices, below the circuit, below the chip. They build a sense of the machines within
the machine, the material, the grit of computation.
Gregory Borenstein
Case and Molly is a prototype for a game inspired by (and in homage to) William
Gibson's novel Neuromancer. It's about the coordination between virtual and physical,
"cyberspace" and "meat." We navigate the tension between our physical surroundings
and our digital networks in a state of continuous partial attention; Case and Molly uses
the mechanics and aesthetics of Neuromancer to explore this quintessential
contemporary dynamic. The game is played by two people mediated by smartphones
and an Oculus Rift VR headset. Together, and under time pressure, they must navigate
Molly through physical space using information that is only available to Case. In the
game, Case sees Molly's point of view in immersive 3D, but he can only communicate a
single bit of information to her. Meanwhile, Molly traverses physical obstacles hoping
Case can solve abstract puzzles in order to gain access to the information she needs.
Kevin Slavin
Named for, and inspired by, the medieval practice of erecting barriers to prevent the
spread of disease, Cordon Sanitaire is a collaborative, location-based mobile game in
which players seek to isolate an infectious "patient zero" from the larger population.
Every day, the game starts abruptly, synchronizing all players at once, and lasts for two
minutes. In 60 seconds, players must choose either to help form the front line of a
quarantine, or remain passive. Under pressure, the "uninfected" attempt to collaborate
without communication, seeking to find the best solution for the group. When those 60
seconds end, a certain number of players are trapped inside with patient zero, and the
score reflects the group's ability to cooperate under duress.
371. Darkball
Che-Wei Wang
Cristiano Ronaldo can famously volley a corner kick in total darkness. The magic
behind this remarkable feat is hidden in Ronaldo's brain, which enables him to use
advance cues to plan upcoming actions. Darkball challenges your brain to do the same,
distilling that scenario into its simplest form: intercept a ball in the dark. All you see is all
you need.
372. DeepView: Computational Tools for Chess Spectatorship
374. Dice++
Food offers a rich multi-modal experience that can deeply affect emotion and memory.
We're interested in exploring the artistic and expressive potential of food beyond mere
nourishment, as a means of creating memorable experiences that involve multiple
senses. For instance, music can change our eating experience by altering our emotions
during the meal, or by evoking a specific time and place. Similarly, sight, smell, and
temperature can all be manipulated to combine with food for expressive effect. In
addition, by drawing upon people's physiology and upbringing, we seek to create
individual, meaningful sensory experiences. Specifically, we are exploring the
connection between music and flavor perception.
Today, algorithms drive our cars, our economy, what we read, and how we play.
Modern-day computer games utilize weighted probabilities to make games more
competitive, fun, and addictive. In casinos, slot machines--once a product of simple
probability--employ similar algorithms to keep players playing. Dice++ takes the
seemingly straightforward probability of rolling a die and determines the outcome with
algorithms of its own.
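For illustration, the Python sketch below (a toy example, not the Dice++ firmware) replaces a fair roll with an algorithmically weighted one.

    import random

    def weighted_roll(weights, rng=random):
        """Return a face from 1 to 6, drawn according to the given weights."""
        return rng.choices(range(1, 7), weights=weights, k=1)[0]

    nudged = [1, 1, 1, 1, 1, 3]           # the algorithm quietly favors sixes
    rolls = [weighted_roll(nudged) for _ in range(10000)]
    print(rolls.count(6) / len(rolls))    # about 0.375 instead of the fair 0.167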
375. EyeWire
Sebastian Seung, Kevin Slavin, Gregory Borenstein, Taylor Levy, David Robert,
Che-Wei Wang and Seung Lab (MIT BCS)
The Seung Lab at MIT's Brain + Cognitive Sciences Department has developed
EyeWire, a game to map the brain. To date, it has attracted an online community of
over 50,000 "citizen neuroscientists" who are mapping the 3D structure of neurons and
discovering neural connections. Playful Systems is collaborating with the Seung Lab to
reconsider EyeWire as a large-scale, mass-appeal mobile game to attract 1MM players
or more. We are currently developing mobile, collaborative game mechanics, and
shifting the focus to short-burst gameplay.
376. GAMR
NEW LISTING
377. Homeostasis
378. MicroPsi: An Architecture for Motivated Cognition
Joscha Bach
379. radiO_o
Kevin Slavin, Mark Feldmeier, Taylor Levy, Daniel Novy and Che-Wei Wang
The MicroPsi project explores broad models of cognition, built on a motivational system
that gives rise to autonomous social and cognitive behaviors. MicroPsi agents are
grounded AI agents, with neuro-symbolic representations, affect, top-down/bottom-up
perception, and autonomous decision making. We are interested in finding out how
motivation informs social interaction (cooperation and competition, communication and
deception), learning, and playing; shapes personality; and influences perception and
creative problem-solving.
382. Storyboards
Giving opaque technology a glass house, Storyboards present the tinkerers or owners
of electronic devices with stories of how their devices work. Just as the circuit board is a
story of star-crossed lovers Anode and Cathode with its cast of characters (resistor,
capacitor, transistor), Storyboards have their own characters driving a parallel visual
narrative.
383. Troxes
NEW LISTING
Jonathan Bobrow
The building blocks we grow up with and the coordinate systems we are introduced to
at an early age shape the design space with which we think. Complex systems are
difficult to understand because they often require transition from one coordinate system
to another. We could even begin to say that empathy is precisely this ability to map
easily to many different coordinate systems. Troxes is a building-block kit based on the
triangle: kids build their own blocks and then assemble Platonic and Archimedean
solids.
Tal Achituv, Catherine D'Ignazio, Alexis Hope, Taylor Levy, Alexandra Metral,
Che-Wei Wang
Action Path is a mobile app that helps citizens provide meaningful feedback about
everyday spaces in their cities by inviting engagement when they are near current issues.
Existing platforms for civic engagement, whether online or offline, are inconvenient and
disconnected from the source of issues they are meant to address. Action Path
addresses barriers to effective civic engagement in community matters by ensuring
more citizens have their voices heard on how to improve their local
communities--converting individual actions into collective action and providing context
and a sense of efficacy, which can help citizens become more effective through regular
practice and feedback.
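To make the location-triggered prompt concrete, here is a minimal Python sketch of the
kind of proximity check such an app might perform; the sample issues, coordinates, and
200-meter radius are illustrative assumptions, not Action Path's actual data or parameters.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Hypothetical open issues near a user; real data would come from the app's backend.
issues = [
    {"title": "Proposed bike lane on Main St", "lat": 42.3736, "lon": -71.1097},
    {"title": "Vacant lot redevelopment", "lat": 42.3601, "lon": -71.0942},
]

def nearby_issues(user_lat, user_lon, radius_m=200):
    """Return issues close enough to prompt the user for feedback."""
    return [i for i in issues
            if haversine_m(user_lat, user_lon, i["lat"], i["lon"]) <= radius_m]

print(nearby_issues(42.3735, -71.1095))
```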
387. Code4Rights
Joy Buolamwini
NEW LISTING
The Civic Crowdfunding project is an initiative to collect data and advance social
research into the emerging field of civic crowdfunding, the use of online crowdfunding
platforms to provide services to communities. The project aims to bring together folks
from across disciplines and professions--from research and government to the tech
sector and community organizations--to talk about civic crowdfunding and its benefits,
challenges, and opportunities. It combines qualitative and quantitative research
methods, from analysis of the theory and history of crowdfunding to fieldwork-based
case studies and geographic analysis of the field.
390. DataBasic
NEW LISTING
391. DeepStream
NEW LISTING
Matthew Stempeck
Catherine D'Ignazio
The Internet has disrupted the aid sector like so many other industries before it. In
times of crisis, donors are increasingly connecting directly with affected populations to
provide participatory aid. The Digital Humanitarian Marketplace aggregates these digital
volunteering projects, organizing them by crisis and skills required to help coordinate
this promising new space.
Erase the Border is a web campaign and voice petition platform. It tells the story of the
Tohono O'odham people, whose community has been divided along 75 miles of the
US-Mexico border by a fence. The fence prevents tribe members from receiving critical
health services and subjects the O'odham to racism and
discrimination. This platform is a pilot that we are using to research the potential of
voice and media petitions for civic discourse.
395. FOLD
Willow Brugh
This checklist is designed to help projects that include an element of data collection to
develop appropriate consent policies and practices. The checklist can be especially
useful for projects that use digital or mobile tools to collect, store, or publish data and that
recognize the importance of seeking the informed consent of the individuals involved (the
data subjects). This checklist does not address the additional considerations necessary
when obtaining the consent of groups or communities, nor how to approach consent in
situations where there is no connection to the data subject. This checklist is intended
for use by project coordinators, and can ground conversations with management and
project staff in order to identify risks and mitigation strategies during project design or
implementation. It should ideally be used with the input of data subjects.
Anurag Gupta, Erhardt Graeff, Huan Sun, Yu Wang and Ethan Zuckerman
Every country has a brand, negative or positive, and that brand is mediated in part by
its global press coverage. We are measuring and ranking the perceptions of the 20
most populous countries by crowdsourcing those perceptions through a "World News
Quiz." Quiz-takers match geographically vague news stories to the countries they think
they occurred in, revealing how positively or negatively they perceive those countries. By
illustrating the way these biases manifest among English and Chinese speakers, we
hope to help news consumers and producers be more aware of the incomplete
portrayals they have internalized and propagated.
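A hedged sketch of how quiz answers could be aggregated into a crude perception index
follows; the answer format and scoring rule are assumptions for illustration, not the
project's actual methodology.

```python
from collections import defaultdict

# Assumed answer format: each record notes which country a quiz-taker guessed
# for a story and whether that story was positive or negative in tone.
answers = [
    {"guessed_country": "Brazil", "story_tone": "negative"},
    {"guessed_country": "Brazil", "story_tone": "positive"},
    {"guessed_country": "Japan", "story_tone": "positive"},
]

def perception_scores(answers):
    """Fraction of positive-tone guesses per country, as a crude perception index."""
    pos = defaultdict(int)
    total = defaultdict(int)
    for a in answers:
        total[a["guessed_country"]] += 1
        if a["story_tone"] == "positive":
            pos[a["guessed_country"]] += 1
    return {country: pos[country] / total[country] for country in total}

print(perception_scores(answers))
```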
Ethan Zuckerman, J. Nathan Matias, Matt Stempeck, Rahul Bhargava and Dan
Schultz
What have you seen in the news this week? And what did you miss? Are you getting
the blend of local, international, political, and sports stories you desire? We're building a
media-tracking platform to empower you, the individual, and news providers
themselves, to see what you're getting and what you're missing in your daily
consumption and production of media. The first round of modules developed for the
platform allows you to compare the breakdown of news topics and byline gender across
multiple news sources.
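As a minimal sketch of the kind of comparison these modules enable, assuming a
simplified article record (the platform's real schema and gender inference are not shown):

```python
from collections import Counter

# Hypothetical article records; a real pipeline would ingest feeds and infer fields.
articles = [
    {"source": "Outlet A", "topic": "politics", "byline_gender": "female"},
    {"source": "Outlet A", "topic": "sports", "byline_gender": "male"},
    {"source": "Outlet B", "topic": "politics", "byline_gender": "male"},
]

def breakdown(articles, field):
    """Per-source percentage breakdown of a given article field."""
    by_source = {}
    for a in articles:
        by_source.setdefault(a["source"], Counter())[a[field]] += 1
    return {src: {k: round(100 * v / sum(counts.values()), 1)
                  for k, v in counts.items()}
            for src, counts in by_source.items()}

print(breakdown(articles, "topic"))
print(breakdown(articles, "byline_gender"))
```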
Edward L. Platt
Media Perspective brings a data visualization into 3D space. This data sculpture
represents mainstream media coverage of Net Neutrality over 15 months, during the
debate over the FCC's classification of broadband services. Each transparent pane
shows a slice in time, allowing users to physically move and look through the timeline.
The topics cutting through the panes show how attention shifted between aspects of the
debate over time.
Pure networks and pure hierarchies both have distinct strengths and weaknesses.
These become glaringly apparent during disaster response. By combining these
modes, their strengths (predictability, accountability, appropriateness, adaptability) can
be optimized, and their weaknesses (fragility, inadequate resources) can be
compensated for. Bridging these two worlds is not merely a technical challenge, but
also a social issue.
405. NetStories
Recent years have witnessed a surge in online digital storytelling tools, enabling users
to more easily create engaging multimedia narratives. Increasing Internet access and
powerful in-browser functionality have laid the foundation for the proliferation of new
online storytelling technologies, ranging from tools for creating interactive online videos
to tools for data visualization. While these tools may contribute to diversification of
online storytelling capacity, sifting through tools and understanding their respective
limitations and affordances poses a challenge to storytellers. The NetStories research
initiative explores emergent online storytelling tools and strategies through a
combination of analyzing tools, facilitating story-hack days, and creating an online
database of storytelling tools.
406. NewsPix
407. NGO2.0
Sasha Costanza-Chock, Becky Hurwitz, Heather Craig, Royal Morris, with support
from Rahul Bhargava, Ed Platt, Yu Wang
The Out for Change Transformative Media Organizing Project (OCTOP) links LGBTQ,
Two-Spirit, and allied media makers, online organizers, and tech-activists across the
United States. In 2013-2014, we conducted a strengths/needs assessment of the media
and organizing capacity of the movement and offered a series of workshops and
skillshares around transmedia organizing. The project is guided by a core group of
project partners and advisers who work with LGBTQ and Two-Spirit folks, and is
supported by faculty and staff at the MIT Center for Civic Media, Research Action
Design, and the Ford Foundation's Advancing LGBT Rights Initiative.
411. PageOneX
Ethan Zuckerman, Edward Platt, Rahul Bhargava and Pablo Rey Mazon
Newspaper front pages are a key source of data about our media ecology. Newsrooms
spend massive time and effort deciding which stories make it to the front page.
PageOneX makes coding and visualizing newspaper front page content much easier,
democratizing access to newspaper attention data. Communication researchers have
analyzed newspaper front pages for decades, using slow, laborious methods.
PageOneX simplifies, digitizes, and distributes the process across the net and makes it
available for researchers, citizens, and activists.
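A toy Python version of the core measurement, assuming coded stories are recorded as
rectangles on the page; the field names and numbers are illustrative assumptions, not
PageOneX's actual implementation.

```python
def coverage_share(page_w, page_h, rectangles):
    """Fraction of the page area covered by coded story rectangles (no overlap assumed)."""
    coded = sum(r["w"] * r["h"] for r in rectangles)
    return coded / (page_w * page_h)

# Hypothetical front page and coded rectangles for one story.
front_page = {"w": 1000, "h": 1400}
story_rects = [{"w": 600, "h": 400}, {"w": 300, "h": 200}]

share = coverage_share(front_page["w"], front_page["h"], story_rects)
print(f"Story occupies {share:.1%} of the front page")
```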
413. Readersourcing
Tal Achituv
Scanner Grabber is a digital police scanner that enables reporters to record, playback,
and export audio, as well as archive public safety radio (scanner) conversations. Like a
TiVo for scanners, it's an update on technology that has been stuck in the last century.
It's a great tool for newsrooms. A common problem for reporters is missing the
beginning of an important police incident because they stepped away from their desk at
the wrong time; Scanner Grabber solves this by letting them play the conversation back.
Snippets of notable audio, such as a police chase, can be exported and embedded
online. Reporters can listen to files while writing stories, or revisit older conversations to
get a more nuanced grasp of police practices or long-term trouble spots. Editors and
reporters can also use the tool for collaboration or public crowdsourcing.
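A minimal sketch of the "TiVo-like" rolling buffer idea, assuming audio arrives in
one-second chunks; the buffer length and the capture source are assumptions, and real
audio handling is omitted.

```python
from collections import deque

CHUNK_SECONDS = 1
BUFFER_SECONDS = 15 * 60  # keep roughly the last 15 minutes (assumed window)

# Oldest chunks fall off the left end automatically once the buffer is full.
ring = deque(maxlen=BUFFER_SECONDS // CHUNK_SECONDS)

def on_audio_chunk(timestamp, pcm_bytes):
    """Called once per captured scanner-audio chunk (capture itself not shown)."""
    ring.append((timestamp, pcm_bytes))

def replay_since(start_timestamp):
    """Return buffered chunks from a given time onward, e.g. when an incident began."""
    return [chunk for ts, chunk in ring if ts >= start_timestamp]
```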
Terra Incognita is a global news game and recommendation system. Terra Incognita
helps you discover interesting news and personal connections to cities that you haven't
read about. Whereas many recommendation systems connect you on the basis of
"similarity," Terra Incognita connects you to information on the basis of "serendipity."
Each time you open the application, Terra Incognita shows you a city that you have not
yet read about and gives you options for reading about it. Chelyabinsk (Russia),
Hiroshima (Japan), Hagåtña (Guam), and Dhaka (Bangladesh) are a few of the places
where you might end up.
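A minimal sketch of serendipity-style recommendation: suggest a city the reader has not
yet encountered rather than one similar to past reading. The city list and reading history
below are illustrative, not Terra Incognita's actual data.

```python
import random

cities = ["Chelyabinsk", "Hiroshima", "Hagåtña", "Dhaka", "Lagos", "Quito"]
already_read = {"Hiroshima", "Dhaka"}  # hypothetical reading history

def recommend_unread_city(cities, already_read):
    """Return a randomly chosen city the reader has no reading history for."""
    unread = [c for c in cities if c not in already_read]
    return random.choice(unread) if unread else None

print(recommend_unread_city(cities, already_read))
```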
The Babbling Brook is an unnamed neighborhood creek in Waltham, MA, that winds its
way to the Charles River. With the help of networked sensors and real-time processing,
the brook constantly tweets about the status of its water quality, including thoughts and
bad jokes about its own environmental and ontological condition. Currently, the
Babbling Brook senses temperature and depth and cross-references that information
with real-time weather data to come up with extremely bad comedy. Thanks to Brian
Mayton, the Responsive Environments group, and Tidmarsh Farms Living Observatory
for their support.
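A hedged sketch of how sensor readings and a weather observation might be composed
into a tweet-length status; the thresholds, phrasing, and the post() placeholder are
assumptions, not the brook's actual sensing or humor pipeline.

```python
def brook_status(temp_c, depth_cm, weather):
    """Compose a short status line from assumed sensor and weather inputs."""
    if weather == "rain" and depth_cm > 40:
        mood = "I'm feeling rather full of myself today."
    elif temp_c < 5:
        mood = "Brrr. Mostly just existential chill down here."
    else:
        mood = "Quietly babbling along, as one does."
    return f"Water {temp_c:.1f}°C, depth {depth_cm} cm, {weather}. {mood}"[:140]

def post(status):
    # Placeholder for an actual Twitter API call.
    print(status)

post(brook_status(temp_c=4.2, depth_cm=48, weather="rain"))
```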
Leo Burd
VoIP Drupal is an innovative framework that brings the power of voice and
Internet-telephony to Drupal sites. It can be used to build hybrid applications that
combine regular touchtone phones, web, SMS, Twitter, IM, and other communication
tools in a variety of ways, facilitating community outreach and providing an online
presence to those who are illiterate or do not have regular access to computers. VoIP
Drupal will change the way you interact with Drupal, your phone, and the web.
420. Vojo.co
423. ZL Vortice
Daniel Paz de Araujo (UNICAMP), Ethan Zuckerman, Adeline Gabriela Silva Gil,
Hermes Renato Hildebrand (PUC-SP/UNICAMP) and Nelson Brissac Peixoto
(PUC-SP)
This project is currently conducting a survey of data from the East Side (Zona Leste) of
the city of São Paulo, Brazil. The aim is to map the dynamics of the landscape:
infrastructure and urban planning, critical landscapes, housing, productive territory,
recycling, and public space. The material will be made available on a digital platform
accessible by computers and mobile devices: a tool specially developed to enable local
communities to disseminate the productive and creative practices that occur in the area,
as well as to enable greater participation in the formulation of public policies. ZL Vortice
is an instrument that will serve to strengthen the social, productive, and cultural
networks of the region.
Mainstream media increasingly quote social media sources for breaking news. "Whose
Voices" tracks who's getting quoted across topics, showing just how citizen media
sources are influencing international news reporting.