
Media Lab Projects List

October 2012

MIT Media Lab


Buildings E14 / E15
75 Amherst Street
Cambridge, Massachusetts 02142

communications@media.mit.edu
http://www.media.mit.edu
Many of the MIT Media Lab research projects described in the following pages are conducted under the auspices of
sponsor-supported, interdisciplinary Media Lab centers, consortia, joint research programs, and initiatives. They are:

Autism & Communication Technology Initiative


The Autism & Communication Technology Initiative utilizes the unique features of the Media Lab to foster the development of
innovative technologies that can enhance and accelerate the pace of autism research and therapy. Researchers are
especially invested in creating technologies that promote communication and independent living by enabling non-autistic
people to understand the ways autistic people are trying to communicate; improving autistic people's ability to use receptive
and expressive language along with other means of functional, non-verbal expression; and providing telemetric support that
reduces reliance on caregivers' physical proximity, yet still enables enriching and natural connectivity as wanted and needed.

CE 2.0
Most of us are awash in consumer electronics (CE) devices, from cell phones to TVs to dishwashers. They provide us with
information, entertainment, and communications, and assist us in accomplishing our daily tasks. Unfortunately, most are not
as helpful as they could and should be; for the most part, they are dumb, unaware of us or our situations, and often difficult
to use. In addition, most CE devices cannot communicate with our other devices, even when such communication and
collaboration would be of great help. The Consumer Electronics 2.0 initiative (CE 2.0) is a collaboration between the Media
Lab and its sponsor companies to formulate the principles for a new generation of consumer electronics that are highly
connected, seamlessly interoperable, situation-aware, and radically simpler to use. Our goal is to show that as computing
and communication capability seep into more of our everyday devices, these devices do not have to become more confusing
and complex, but rather can become more intelligent in a cooperative and user-friendly way.

Center for Civic Media


Communities need information to make decisions and take action: to provide aid to neighbors in need, to purchase an
environmentally sustainable product and shun a wasteful one, to choose leaders on local and global scales. Communities
are also rich repositories of information and knowledge, and often develop their own innovative tools and practices for
information sharing. Existing systems to inform communities are changing rapidly, and new ecosystems are emerging where
old distinctions like writer/audience and journalist/amateur have collapsed. The Civic Media group is a partnership between
the MIT Media Lab and Comparative Media Studies at MIT. Together, we work to understand these new ecosystems and to
build tools and systems that help communities collect and share information and connect that information to action. We work
closely with communities to understand their needs and strengths, and to develop useful tools together using collaborative
design principles. We particularly focus on tools that can help amplify the voices of communities often excluded from the
digital public sphere and connect them with new audiences, as well as on systems that help us understand media ecologies,
augment civic participation, and foster digital inclusion.

Center for Future Storytelling


The Center for Future Storytelling at the Media Lab is rethinking storytelling for the 21st century. The Center takes a new and
dynamic approach to how we tell our stories, creating new methods, technologies, and learning programs that recognize and
respond to the changing communications landscape. The Center builds on the Media Lab's more than 25 years of
experience in developing society-changing technologies for human expression and interactivity. By applying leading-edge
technologies to make stories more interactive, improvisational, and social, researchers are working to transform audiences
into active participants in the storytelling process, bridging the real and virtual worlds, and allowing everyone to make and
share their own unique stories. Research also explores ways to revolutionize imaging and display technologies, including
developing next-generation cameras and programmable studios, making movie production more versatile and economical.

Center for Mobile Learning


The Center for Mobile Learning invents and studies new mobile technologies to promote learning anywhere, anytime, for
anyone. The Center focuses on mobile tools that empower learners to think creatively, collaborate broadly, and develop
applications that are useful to themselves and others around them. The Center's work covers location-aware learning
applications, mobile sensing and data collection, augmented reality gaming, and other educational uses of mobile
technologies. The Center’s first major activity will focus on App Inventor, a programming system that makes it easy for
learners to create mobile apps by fitting together puzzle piece-shaped “blocks” in a web browser.
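As a rough illustration of the event-driven programming model that App Inventor's blocks express, the sketch below models a "when Button.Click" block stack in plain Python. The Component class, the when/fire methods, and the component names are hypothetical stand-ins for this illustration, not App Inventor's actual APIs or runtime.

class Component:
    """Stand-in for an app component (a Button, a Label, ...)."""
    def __init__(self, name):
        self.name = name
        self.handlers = {}    # event name -> handler (a "when ... do" block stack)
        self.properties = {}  # property name -> value (targets of "set ... to" blocks)

    def when(self, event, handler):
        # Corresponds to snapping a handler into a "when <Component>.<Event>" block.
        self.handlers[event] = handler

    def fire(self, event):
        # Simulates the runtime dispatching an event to its block stack.
        if event in self.handlers:
            self.handlers[event]()

button = Component("Button1")
label = Component("Label1")

# "when Button1.Click do set Label1.Text to 'Hello!'" expressed as a handler.
button.when("Click", lambda: label.properties.update(Text="Hello!"))

button.fire("Click")              # simulate a tap
print(label.properties["Text"])   # -> Hello!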

The most current information about our research is available on the MIT Media Lab Web site, at
http://www.media.mit.edu/research/.

Communications Futures Program
The Communications Futures Program conducts research on industry dynamics, technology opportunities, and regulatory
issues that form the basis for communications endeavors of all kinds, from telephony to RFID tags. The program operates
through a series of working groups led jointly by MIT researchers and industry collaborators. It is highly participatory, and its
agenda reflects the interests of member companies that include both traditional stakeholders and innovators. It is jointly
directed by Dave Clark (CSAIL), Charles Fine (Sloan School of Management), and Andrew Lippman (Media Lab).

Connection Science and Engineering


Our lives have been transformed by networks that combine people and computers in new ways. They have revolutionized
the nature of the economy, business, government, politics, and our day-to-day existence. But there is little understanding of
the fundamental nature of these networks precisely because the combination of human and technological elements poses a
host of conceptual and empirical challenges. Our goal is to forge the foundations of an integrated framework for
understanding the connected world we live in. This requires a multidisciplinary, interdepartmental effort that leverages and
supports existing disciplinary network projects. The Center is jointly directed by Asu Ozdaglar (EECS) and Alex 'Sandy'
Pentland.

Consumer Electronics Laboratory


The Consumer Electronics Laboratory provides a unique research environment to explore ideas, make things, and innovate
in new directions for consumer products and services. Research projects, which span the entire Media Lab and beyond,
focus on: innovative materials and design/fabrication methods for them; new power technologies; new sensors, actuators,
and displays; self-managing, incrementally and limitlessly scalable ecosystems of smart devices; cooperative wireless
communications; co-evolution of devices and content; and user experience. An overarching theme that runs through all the
work is the co-evolution of design principles and technological discoveries, resulting in simple, ubiquitous, easy- and
delightful-to-use devices that know a great deal about one another, the world, and the people in their proximity.

Digital Life
Digital Life consortium activities engage virtually the entire faculty of the Media Lab around the theme of "open innovation."
Researchers divide the topic into three areas: open communications, open knowledge, and open everything. The first
explores the design and scalability of agile, grassroots communications systems that incorporate a growing understanding of
emergent social behaviors in a digital world; the second considers a cognitive architecture that can support many features of
"human intelligent thinking" and its expressive and economic use; and the third extends the idea of inclusive design to
immersive, affective, and biological interfaces and actions.

Things That Think


Things That Think is inventing the future of digitally augmented objects and environments. Toward this end, Things That
Think researchers are developing sophisticated sensing and computational architectures for networks of everyday things;
designing seamless interfaces that bridge the digital and physical worlds while meeting the human need for creative
expression; and creating an understanding of context and affect that helps things "think" at a much deeper level. Things That
Think projects under way at the Lab range from inventing the city car of the future to designing a prosthesis with the ability to
help a person or machine read social-emotional cues—research that will create the technologies and tools to redefine the
products and services of tomorrow.

V. Michael Bove Jr.—Object-Based Media ....................................................................................................................... 1

1. 3D Telepresence Chair .................................................................................................................................................. 1


2. Calliope .......................................................................................................................................................................... 1
3. Consumer Holo-Video .................................................................................................................................................... 1
4. Direct Fringe Writing of Computer-Generated Holograms ............................................................................................. 1
5. Everything Tells a Story ................................................................................................................................................. 1
6. Guided-Wave Light Modulator ....................................................................................................................................... 2
7. Infinity-by-Nine ............................................................................................................................................................... 2
8. Narratarium .................................................................................................................................................................... 2
9. ProtoTouch: Multitouch Interfaces to Everyday Objects ............................................................................................... 2
10. ShakeOnIt ...................................................................................................................................................................... 2
11. Simple Spectral Sensing ................................................................................................................................................ 2
12. Slam Force Net .............................................................................................................................................................. 3
13. SurroundVision .............................................................................................................................................................. 3
14. The "Bar of Soap": Grasp-Based Interfaces .................................................................................................................. 3
15. Vision-Based Interfaces for Mobile Devices .................................................................................................................. 3

Ed Boyden—Synthetic Neurobiology ............................................................................................................................... 4

16. Direct Engineering and Testing of Novel Therapeutic Platforms for Treatment of Brain Disorders ............................... 4
17. Exploratory Technologies for Understanding Neural Circuits ........................................................................................ 4
18. Hardware and Systems for Control of Neural Circuits with Light ................................................................................... 4
19. Molecular Reagents Enabling Control of Neurons and Biological Functions with Light ................................................. 5
20. Recording and Data-Analysis Technologies for Observing and Analyzing Neural Circuit Dynamics ............................ 5
21. Understanding Neural Circuit Computations and Finding New Therapeutic Targets .................................................... 5

Cynthia Breazeal—Personal Robots ................................................................................................................................ 5

22. AIDA: Affective Intelligent Driving Agent ........................................................................................................................ 6


23. Cloud-HRI ...................................................................................................................................................................... 6
24. DragonBot: Android phone robots for long-term HRI ..................................................................................................... 6
25. Huggable: A Robotic Companion for Long-Term Health Care, Education, and Communication ................................... 6
26. MDS: Crowdsourcing Human-Robot Interaction: Online Game to Study Collaborative Human Behavior .................... 7
27. MDS: Exploring the Dynamics of Human-Robot Collaboration ...................................................................................... 7
28. Robotic Textiles ............................................................................................................................................................. 7
29. Socially Assistive Robotics: An NSF Expedition in Computing ...................................................................................... 7
30. Storytelling in the Preschool of the Future ......................................................................... 8
31. TinkRBook: Reinventing the Reading Primer ................................................................................................................ 8
32. Zipperbot: Robotic Continuous Closure for Fabric Edge Joining ................................................................................... 8

Leah Buechley—High-Low Tech ....................................................................................................................................... 8

33. Circuit Sketchbook ......................................................................................................................................................... 8


34. Codeable Objects .......................................................................................................................................................... 9
35. Computational Textiles Curriculum ................................................................................................................................ 9
36. Do-It-Your(Cell)f Phone ................................................................................................................................................. 9
37. Exploring Artisanal Technology ..................................................................................................................................... 9
38. LilyPad Arduino ............................................................................................................................................................ 10
39. LilyTiny ......................................................................................................................................................................... 10
40. Microcontrollers as Material ......................................................................................................................................... 10
41. Open Source Consumer Electronics ............................................................................................................................ 10
42. Programmable Paintings .............................................................................................................................................. 10
43. StoryClip ...................................................................................................................................................................... 10

Catherine Havasi—Digital Intuition ................................................................................................................................. 11

44. CharmMe ..................................................................................................................................................................... 11


45. ConceptNet .................................................................................................................................................................. 11
46. Corona ......................................................................................................................................................................... 11
47. Divisi: Reasoning Over Semantic Relationships .......................................................................................................... 11

48. Narratarium .................................................................................................................................................................. 12
49. Open Mind Common Sense ......................................................................................................................................... 12
50. Red Fish, Blue Fish ...................................................................................................................................................... 12
51. Semantic Synesthesia ................................................................................................................................................. 12
52. Story Space ................................................................................................................................................................. 12
53. The Glass Infrastructure .............................................................................................................................................. 12
54. Understanding Dialogue .............................................................................................................................................. 13

Hugh Herr—Biomechatronics ......................................................................................................................................... 13

55. Artificial Gastrocnemius ............................................................................................................................................... 13


56. Biomimetic Active Prosthesis for Above-Knee Amputees ............................................................................................ 13
57. Control of Muscle-Actuated Systems via Electrical Stimulation ................................................................................... 14
58. Effect of a Powered Ankle on Shock Absorption and Interfacial Pressure .................................................................. 14
59. FitSocket: A Better Way to Make Sockets ................................................................................................................... 14
60. Human Walking Model Predicts Joint Mechanics, Electromyography, and Mechanical Economy .............................. 14
61. Load-Bearing Exoskeleton for Augmentation of Human Running ............................................................................... 14
62. Powered Ankle-Foot Prosthesis ................................................................................................................................... 15
63. Sensor-Fusions for an EMG Controlled Robotic Prosthesis ........................................................................................ 15
64. Variable Impedance Prosthetic (VIPr) Socket Design ................................................................................................. 15

Cesar Hidalgo—Macro Connections .............................................................................................................................. 15

65. Cultural Exports ........................................................................................................................................................... 16


66. Immersion .................................................................................................................................................................... 16
67. Place Pulse .................................................................................................................................................................. 16
68. The Economic Complexity Observatory ...................................................................................................................... 16
69. The Language Group Network ..................................................................................................................................... 16

Henry Holtzman—Information Ecology .......................................................................................................................... 17

70. 8D Display .................................................................................................................................................................... 17


71. Air Mobs ....................................................................................................................................................................... 17
72. Brin.gy: What Brings Us Together ............................................................................................................................... 17
73. CoCam ......................................................................................................................................................................... 18
74. ContextController ......................................................................................................................................................... 18
75. CoSync ........................................................................................................................................................................ 18
76. Droplet ......................................................................................................................................................................... 18
77. Flow ............................................................................................................................................................................. 18
78. MindRider ..................................................................................................................................................................... 19
79. MobileP2P .................................................................................................................................................................... 19
80. NewsJack ..................................................................................................................................................................... 19
81. NeXtream: Social Television ........................................................................................................................................ 19
82. OpenIR ......................................................................................................................................................................... 19
83. Proverbial Wallets ........................................................................................................................................................ 19
84. StackAR ....................................................................................................................................................................... 20
85. SuperShoes ................................................................................................................................................................. 20
86. Tactile Allegory ............................................................................................................................................................ 20
87. The Glass Infrastructure .............................................................................................................................................. 20
88. Truth Goggles .............................................................................................................................................................. 20
89. Twitter Weather ............................................................................................................................................................ 21
90. Where The Hel ............................................................................................................................................................. 21

Hiroshi Ishii—Tangible Media ......................................................................................................................................... 21

91. Ambient Furniture ........................................................................................................................................................ 21


92. Beyond: A Collapsible Input Device for 3D Direct Manipulation .................................................................................. 21
93. FocalSpace .................................................................................................................................................................. 21
94. GeoSense .................................................................................................................................................................... 21
95. IdeaGarden .................................................................................................................................................................. 22
96. Jamming User Interfaces ............................................................................................................................................. 22
97. Kinected Conference ................................................................................................................................................... 22

98. MirrorFugue II .............................................................................................................................................................. 22
99. Peddl ............................................................................................................................................................................ 22
100. PingPongPlusPlus ....................................................................................................................................................... 23
101. Radical Atoms .............................................................................................................................................................. 23
102. Recompose .................................................................................................................................................................. 23
103. Relief ............................................................................................................................................................................ 23
104. RopeRevolution ........................................................................................................................................................... 23
105. SandScape .................................................................................................................................................................. 24
106. Sensetable ................................................................................................................................................................... 24
107. Sourcemap ................................................................................................................................................................... 24
108. T(ether) ........................................................................................................................................................................ 24
109. Tangible Bits ................................................................................................................................................................ 25
110. Topobo ......................................................................................................................................................................... 25
111. Video Play .................................................................................................................................................................... 25

Joseph M. Jacobson—Molecular Machines .................................................................................................................. 25

112. GeneFab ...................................................................................................................................................................... 25


113. NanoFab ...................................................................................................................................................................... 26
114. Synthetic Photosynthesis ............................................................................................................................................. 26

Sepandar Kamvar—Social Computing ........................................................................................................................... 26

115. The Dog Programming Language ................................................................................................................................ 26

Kent Larson—Changing Places ...................................................................................................................................... 26

116. A Market Economy of Trips .......................................................................................................................................... 26


117. AEVITA ........................................................................................................................................................................ 27
118. Autonomous Facades for Zero-Energy Urban Housing ............................................................................................... 27
119. BTNz! ........................................................................................................................................................................... 27
120. CityCar ......................................................................................................................................................................... 27
121. CityCar Folding Chassis ............................................................................................................................................. 28
122. CityCar Half-Scale Prototype ...................................................................................................................................... 28
123. CityCar Ingress-Egress Model ..................................................................................................................................... 28
124. CityCar Testing Platform .............................................................................................................................................. 28
125. CityHealth and Indoor Environment ............................................................................................................................. 28
126. CityHome ..................................................................................................................................................................... 29
127. CityHome: RoboWall .................................................................................................................................................... 29
128. Distinguish: Home Activity Recognition ....................................................................................................................... 29
129. FlickInk ......................................................................................................................................................................... 29
130. Hiriko CityCar Urban Feasibility Studies ...................................................................................................................... 29
131. Hiriko CityCar with Denokinn ....................................................................................................................................... 30
132. Home Genome: Mass-Personalized Housing .............................................................................................................. 30
133. HomeMaestro .............................................................................................................................................................. 30
134. Human Health Monitoring in Vehicles .......................................................................................................................... 30
135. Intelligent Autonomous Parking Environment ............................................................................................................. 31
136. Mass-Personalized Solutions for the Elderly ............................................................................................................... 31
137. Media Lab Energy and Charging Research Station ..................................................................................................... 31
138. MITes+: Portable Wireless Sensors for Studying Behavior in Natural Settings ........................................................... 31
139. Mobility on Demand Systems ...................................................................................................................................... 32
140. Open-Source Furniture ................................................................................................................................................ 32
141. Operator ....................................................................................................................................................................... 32
142. Participatory Environmental Sensing for Communities ............................................................................................... 32
143. PlaceLab and BoxLab .................................................................................................................................................. 32
144. Powersuit: Micro-Energy Harvesting ............................................................................................................................ 33
145. Robotic Facade / Personalized Sunlight ...................................................................................................................... 33
146. SeedPod: Interactive Farming Module ......................................................................................................................... 33
147. Shortest Path Tree ....................................................................................................................................................... 33
148. Smart Customization of Men's Dress Shirts: A Study on Environmental Impact ......................................................... 33
149. Smart DC MicroGrid ..................................................................................................................................................... 33
150. smartCharge ................................................................................................................................................................ 34

151. Spike: Social Cycling ................................................................................................................................................... 34
152. Wheel Robots .............................................................................................................................................................. 34
153. WorkLife ....................................................................................................................................................................... 34

Henry Lieberman—Software Agents .............................................................................................................................. 35

154. Common-Sense Reasoning for Interactive Applications .............................................................................................. 35


155. CommonConsensus: A Game for Collecting Commonsense Goals ............................................................................ 35
156. E-Commerce When Things Go Wrong ........................................................................................................................ 35
157. Goal-Oriented Interfaces for Consumer Electronics .................................................................................................... 36
158. Goal-Oriented Interfaces for Mobile Phones ................................................................................................................ 36
159. Graphical Interfaces for Software Visualization and Debugging .................................................................................. 36
160. Human Goal Network ................................................................................................................................................... 36
161. Improving flexibility of Natural Language Interfaces by accommodating vague and ambiguous input ........................ 36
162. Learning Common Sense in a Second Language ....................................................................................................... 37
163. Multi-Lingual ConceptNet ............................................................................................................................................. 37
164. Multilingual Common Sense ........................................................................................................................................ 37
165. Navigating in Very Large Display Spaces .................................................................................................................... 37
166. Open Interpreter ........................................................................................................................................................... 37
167. ProcedureSpace: Managing Informality by Example ................................................................................................... 38
168. Programming in Natural Language .............................................................................................................................. 38
169. Raconteur: From Chat to Stories ................................................................................................................................. 38
170. Relational Analogies in Semantic Networks ................................................................................................................ 38
171. Ruminati: Tackling Cyberbullying with Computational Empathy .................................................................................. 38
172. Storied Navigation ........................................................................................................................................................ 39
173. Time Out: Reflective User Interface for Social Networks ............................................................................................. 39

Andy Lippman—Viral Spaces ......................................................................................................................................... 39

174. Air Mobs ....................................................................................................................................................................... 39


175. AudioFile ...................................................................................................................................................................... 39
176. Augmented Matter ....................................................................................................................................................... 40
177. Barter: A Market-Incented Wisdom Exchange ............................................................................................................. 40
178. Brin.gy: What Brings Us Together ............................................................................................................................... 40
179. BTNz! ........................................................................................................................................................................... 40
180. CoCam ......................................................................................................................................................................... 41
181. CoSync ........................................................................................................................................................................ 41
182. Electric Price Tags ....................................................................................................................................................... 41
183. Geo.gy: Location Shortener ......................................................................................................................................... 41
184. Line of Sound ............................................................................................................................................................... 41
185. LipSync ........................................................................................................................................................................ 42
186. Mapping Community Learning ..................................................................................................................................... 42
187. NewsFlash ................................................................................................................................................................... 42
188. Peddl ............................................................................................................................................................................ 42
189. Point & Shoot Data ...................................................................................................................................................... 42
190. Reach ........................................................................................................................................................................... 43
191. Recompose .................................................................................................................................................................. 43
192. Social Transactions/Open Transactions ...................................................................................................................... 43
193. T(ether) ........................................................................................................................................................................ 43
194. T+1 ............................................................................................................................................................................... 44
195. The Glass Infrastructure .............................................................................................................................................. 44
196. VR Codes ..................................................................................................................................................................... 44

Tod Machover—Opera of the Future .............................................................................................................................. 44

197. A Toronto Symphony: Massive Musical Collaboration ................................................................................................. 44


198. Advanced Audio Systems for Live Performance .......................................................................................................... 45
199. Death and the Powers: Redefining Opera ................................................................................................................... 45
200. Designing Immersive Multi-Sensory Eating Experiences ............................................................................................ 45
201. Disembodied Performance .......................................................................................................................................... 45
202. DrumTop ...................................................................................................................................................................... 45
203. Gestural Media Framework .......................................................................................................................................... 46

204. Hyperinstruments ......................................................................................................................................................... 46
205. Hyperscore ................................................................................................................................................................... 46
206. Media Scores ............................................................................................................................................................... 47
207. Personal Opera ............................................................................................................................................................ 47
208. Remote Theatrical Immersion: Extending "Sleep No More" ........................................................................................ 47
209. Vocal Vibrations: Expressive Performance for Body-Mind Wellbeing .......................................................................... 47

Pattie Maes—Fluid Interfaces ......................................................................................................................................... 48

210. Augmented Product Counter ....................................................................................................................................... 48


211. Blossom ....................................................................................................................................................................... 48
212. Community Data Portrait .............................................................................................................................................. 48
213. Cornucopia: Digital Gastronomy .................................................................................................................................. 49
214. Defuse .......................................................................................................................................................................... 49
215. Display Blocks .............................................................................................................................................................. 49
216. EyeRing: A Compact, Intelligent Vision System on a Ring .......................................................................................... 49
217. FlexDisplays ................................................................................................................................................................. 49
218. Hyperego ..................................................................................................................................................................... 50
219. Inktuitive: An Intuitive Physical Design Workspace ..................................................................................................... 50
220. InterPlay: Full-Body Interaction Platform ..................................................................................................................... 50
221. ioMaterials .................................................................................................................................................................... 50
222. Liberated Pixels ........................................................................................................................................................... 50
223. Light.Bodies ................................................................................................................................................................. 51
224. LuminAR ...................................................................................................................................................................... 51
225. MemTable .................................................................................................................................................................... 51
226. Mouseless .................................................................................................................................................................... 51
227. Moving Portraits ........................................................................................................................................................... 51
228. MTM "Little John" ......................................................................................................................................................... 52
229. Perifoveal Display ........................................................................................................................................................ 52
230. PoCoMo ....................................................................................................................................................................... 52
231. PreCursor ..................................................................................................................................................................... 52
232. Pulp-Based Computing: A Framework for Building Computers Out of Paper .............................................................. 52
233. Quickies: Intelligent Sticky Notes ................................................................................................................................. 53
234. ReachIn ........................................................................................................................................................................ 53
235. ReflectOns: Mental Prostheses for Self-Reflection ...................................................................................................... 53
236. Remnant: Handwriting Memory Card ........................................................................................................................... 53
237. Sensei: A Mobile Tool for Language Learning ............................................................................................................. 53
238. Shutters: A Permeable Surface for Environmental Control and Communication ......................................................... 54
239. Siftables: Physical Interaction with Digital Media ......................................................................................................... 54
240. Six-Forty by Four-Eighty: An Interactive Lighting System ............................................................................................ 54
241. SixthSense ................................................................................................................................................................... 54
242. SPARSH ...................................................................................................................................................................... 54
243. Spotlight ....................................................................................................................................................................... 55
244. Sprout I/O: A Texturally Rich Interface ........................................................................................................................ 55
245. Surflex: A Shape-Changing Surface ............................................................................................................................ 55
246. Swyp ............................................................................................................................................................................ 55
247. TaPuMa: Tangible Public Map ..................................................................................................................................... 56
248. TeleStudio .................................................................................................................................................................... 56
249. Textura ......................................................................................................................................................................... 56
250. The Relative Size of Things ......................................................................................................................................... 56
251. thirdEye ........................................................................................................................................................................ 56
252. Transitive Materials: Towards an Integrated Approach to Material Technology .......................................................... 57
253. VisionPlay .................................................................................................................................................................... 57
254. Watt Watcher ............................................................................................................................................................... 57

Frank Moss—New Media Medicine ................................................................................................................................. 57

255. CollaboRhythm ............................................................................................................................................................ 57


256. Collective Discovery ..................................................................................................................................................... 58
257. ForgetAboutIT? ............................................................................................................................................................ 58
258. I'm Listening ................................................................................................................................................................. 58



259. Oovit PT ....................................................................................................................................................................... 58

Neri Oxman—Mediated Matter ........................................................................................................................................ 59

260. 3D Printing of Functionally Graded Materials ............................................................................................................. 59


261. Beast ............................................................................................................................................................................ 59
262. Building-Scale 3D Printing ........................................................................................................................................... 59
263. Carpal Skin .................................................................................................................................................................. 59
264. CNSILK Pavilion .......................................................................................................................................................... 60
265. CNSILK: Computer Numerically Controlled Silk Cocoon Construction ....................................................................... 60
266. Digitally Reconfigurable Surface .................................................................................................................................. 60
267. FABRICOLOGY: Variable-Property 3D Printing as a Case for Sustainable Fabrication ............................................. 60
268. FitSocket: A Better Way to Make Sockets ................................................................................................................... 61
269. Macro Atom Additive Manufacturing ............................................................................................................................ 61
270. Mobile Office ................................................................................................................................................................ 61
271. Monocoque .................................................................................................................................................................. 61
272. Morphable Structures ................................................................................................................................................... 61
273. PCB Origami ................................................................................................................................................................ 61
274. Polyphemus Transport ................................................................................................................................................. 62
275. Rapid Craft ................................................................................................................................................................... 62
276. Raycounting ................................................................................................................................................................. 62
277. Responsive Glass ........................................................................................................................................................ 62
278. Robotic Light Expressions ........................................................................................................................................... 62
279. Shape Memory Inkjet ................................................................................................................................................... 63
280. SpiderBot ..................................................................................................................................................................... 63
281. Superconductive Powder Purification Device .............................................................................................................. 63

Joseph Paradiso—Responsive Environments .............................................................................................................. 63

282. A Machine Learning Toolbox for Musician Computer Interaction ................................................................................ 63


283. Beyond the Light Switch: New Frontiers in Dynamic Lighting ...................................................................................... 64
284. Chameleon Guitar: Physical Heart in a Virtual Body ................................................................................................... 64
285. Customizable Sensate Surface for Music Control ....................................................................................................... 64
286. Data-Driven Elevator Music ......................................................................................................................................... 64
287. Dense, Low-Power Environmental Monitoring for Smart Energy Profiling ................................................................... 65
288. Digito: A Fine-Grained, Gesturally Controlled Virtual Musical Instrument ................................................................... 65
289. DoppelLab: Spatialized Sonification in a 3D Virtual Environment ................................................................................ 65
290. DoppelLab: Tools for Exploring and Harnessing Multimodal Sensor Network Data .................................................... 65
291. Expressive Re-Performance ........................................................................................................................................ 66
292. Feedback Controlled Solid State Lighting .................................................................................................................... 66
293. FreeD ........................................................................................................................................................................... 66
294. Funk2: Causal Reflective Programming ...................................................................................................................... 66
295. Gesture Recognition Toolkit ......................................................................................................................................... 67
296. Grassroots Mobile Infrastructure .................................................................................................................................. 67
297. Hackable, High-Bandwidth Sensory Augmentation ..................................................................................................... 67
298. Patchwerk: Multi-User Network Control of a Massive Modular Synth .......................................................................... 67
299. Personal Video Layers for Privacy ............................................................................................................................... 67
300. Rapidnition: Rapid User-Customizable Gesture Recognition ...................................................................................... 68
301. Scalable and Versatile Surface for Ubiquitous Sensing ............................................................................................... 68
302. TRUSS: Tracking Risk with Ubiquitous Smart Sensing ............................................................................................... 68
303. Virtual Messenger ........................................................................................................................................................ 68
304. Wearable, Wireless Sensor System for Sports Medicine and Interactive Media ......................................................... 69
305. WristQue: A Personal Wristband for Sensing and Smart Infrastructure ...................................................................... 69

Alex 'Sandy' Pentland—Human Dynamics .................................................................................................................... 69

306. Economic Decision-Making in the Wild ........................................................................................................................ 69


307. Funf: Open Sensing Framework .................................................................................................................................. 69
308. openPDS: A Privacy-Preserving Personal Data Store ................................................................................................. 70
309. Sensible Organizations ................................................................................................................................................ 70
310. Social Signals in Biomedicine ...................................................................................................................................... 70



Rosalind W. Picard—Affective Computing .................................................................................................................... 70

311. Analysis of Autonomic Sleep Patterns ......................................................................................................................... 70


312. Auditory Desensitization Games .................................................................................................................................. 70
313. Automatic Stress Recognition in Real-Life Settings ..................................................................................................... 71
314. Cardiocam .................................................................................................................................................................... 71
315. CrowdCounsel ............................................................................................................................................................. 71
316. Customized Computer-Mediated Interventions ............................................................................................................ 71
317. Emotion and Memory ................................................................................................................................................... 71
318. Evaluation Tool for Recognition of Social-Emotional Expressions from Facial-Head Movements .............................. 72
319. Exploring Temporal Patterns of Smile ......................................................................................................................... 72
320. Externalization Toolkit .................................................................................................................................................. 72
321. FaceSense: Affective-Cognitive State Inference from Facial Video ............................................................................ 72
322. Facial Expression Analysis Over the Web ................................................................................................................... 73
323. FEEL: Frequent EDA Event Logger ............................................................................................................................. 73
324. Frame It ........................................................................................................................................................................ 73
325. Gesture Guitar ............................................................................................................................................................. 73
326. IDA: Inexpensive Networked Digital Stethoscope ........................................................................................................ 73
327. Infant Monitoring and Communication ......................................................................................................................... 73
328. Long-Term Physio and Behavioral Data Analysis ........................................................................................................ 74
329. Machine Learning and Pattern Recognition with Multiple Modalities ........................................................................... 74
330. Measuring Arousal During Therapy for Children with Autism and ADHD .................................................................... 74
331. Measuring Customer Experiences with Arousal .......................................................................................................... 74
332. Mobile Health Interventions for Drug Addiction and PTSD .......................................................................................... 75
333. Multimodal Computational Behavior Analysis ............................................................................................................. 75
334. Sensor-Enabled Measurement of Stereotypy and Arousal in Individuals with Autism ................................................. 75
335. Social + Sleep + Moods ............................................................................................................................................... 75
336. StoryScape .................................................................................................................................................................. 76
337. The Frustration of Learning Monopoly ......................................................................................................................... 76

Ramesh Raskar—Camera Culture .................................................................................................................................. 76

338. 6D Display .................................................................................................................................................................... 76


339. Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance ................................................... 76
340. CATRA: Mapping of Cataract Opacities Through an Interactive Approach ................................................................. 77
341. Coded Computational Photography ............................................................................................................................. 77
342. Compressive Sensing for Visual Signals ..................................................................................................................... 77
343. Layered 3D: Glasses-Free 3D Printing ........................................................................................................................ 77
344. LensChat: Sharing Photos with Strangers ................................................................................................................... 77
345. Looking Around Corners .............................................................................................................................................. 78
346. NETRA: Smartphone Add-On for Eye Tests ................................................................................................................ 78
347. PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures ............................................................... 78
348. Polarization Fields: Glasses-Free 3DTV ...................................................................................................................... 78
349. Portable Retinal Imaging .............................................................................................................................................. 79
350. Reflectance Acquisition Using Ultrafast Imaging ......................................................................................................... 79
351. Second Skin: Motion Capture with Actuated Feedback for Motor Learning ................................................................ 79
352. Shield Field Imaging .................................................................................................................................................... 79
353. Single Lens Off-Chip Cellphone Microscopy ............................................................................................................... 79
354. Slow Display ................................................................................................................................................................ 80
355. SpeckleSense .............................................................................................................................................................. 80
356. Tensor Displays: High-Quality Glasses-Free 3D TV .................................................................................................... 80
357. Theory Unifying Ray and Wavefront Lightfield Propagation ........................................................................................ 80
358. Trillion Frames Per Second Camera ............................................................................................................................ 80
359. Vision on Tap ............................................................................................................................................................... 81
360. VisionBlocks ................................................................................................................................................................. 81



Mitchel Resnick—Lifelong Kindergarten ....................................................................................................................... 81

361. App Inventor ................................................................................................................................................................. 81


362. Collab Camp ................................................................................................................................................................ 81
363. Computer Clubhouse ................................................................................................................................................... 82
364. Computer Clubhouse Village ....................................................................................................................................... 82
365. Drawdio ........................................................................................................................................................................ 82
366. Family Scratch Nights .................................................................................................................................................. 82
367. Learning with Data ....................................................................................................................................................... 82
368. MaKey MaKey .............................................................................................................................................................. 83
369. Map Scratch ................................................................................................................................................................. 83
370. MelodyMorph ............................................................................................................................................................... 83
371. Re·play ......................................................................................................................................................................... 83
372. Scratch ......................................................................................................................................................................... 83
373. Scratch Day ................................................................................................................................................................. 84
374. ScratchEd .................................................................................................................................................................... 84
375. ScratchJr ...................................................................................................................................................................... 84
376. Singing Fingers ............................................................................................................................................................ 84

Deb Roy—Cognitive Machines ....................................................................................................................................... 84

377. BlitzScribe: Speech Analysis for the Human Speechome Project ............................................................................... 85
378. Crowdsourcing the Creation of Smart Role-Playing Agents ........................................................................................ 85
379. HouseFly: Immersive Video Browsing and Data Visualization .................................................................................... 85
380. Human Speechome Project ......................................................................................................................................... 85
381. Speech Interaction Analysis for the Human Speechome Project ................................................................................ 85
382. Speechome Recorder for the Study of Child Development Disorders ......................................................................... 86

Chris Schmandt—Speech + Mobility .............................................................................................................................. 86

383. Back Talk ..................................................................................................................................................................... 86


384. Dotstorm ...................................................................................................................................................................... 86
385. Flickr This ..................................................................................................................................................................... 86
386. frontdesk ...................................................................................................................................................................... 87
387. Going My Way ............................................................................................................................................................. 87
388. Guiding Light ................................................................................................................................................................ 87
389. Indoor Location Sensing Using Geo-Magnetism ......................................................................................................... 87
390. InterTwinkles ................................................................................................................................................................ 87
391. LocoRadio .................................................................................................................................................................... 88
392. Musicpainter ................................................................................................................................................................. 88
393. OnTheRun ................................................................................................................................................................... 88
394. Puzzlaef ....................................................................................................................................................................... 88
395. Radio-ish Media Player ................................................................................................................................................ 88
396. ROAR .......................................................................................................................................................................... 89
397. SeeIt-ShareIt ................................................................................................................................................................ 89
398. Spellbound ................................................................................................................................................................... 89
399. Spotz ............................................................................................................................................................................ 89
400. Tin Can ........................................................................................................................................................................ 89
401. Tin Can Classroom ...................................................................................................................................................... 89

Ethan Zuckerman—Civic Media ...................................................................................................................................... 90

402. Between the Bars ......................................................................................................................................................... 90


403. Codesign Toolkit .......................................................................................................................................................... 90
404. Controversy Mapper .................................................................................................................................................... 90
405. Data Therapy ............................................................................................................................................................... 90
406. Grassroots Mobile Infrastructure .................................................................................................................................. 90
407. LazyTruth ..................................................................................................................................................................... 91
408. Mapping Banned Books ............................................................................................................................................... 91
409. Mapping the Globe ....................................................................................................................................................... 91
410. Media Cloud ................................................................................................................................................................. 91



411. Media Meter ................................................................................................................................................................. 91
412. New Day New Standard ............................................................................................................................................... 92
413. NewsJack ..................................................................................................................................................................... 92
414. NGO 2.0 ....................................................................................................................................................................... 92
415. PageOneX ................................................................................................................................................................... 92
416. Social Mirror ................................................................................................................................................................. 92
417. T.I.C.K.L.E. .................................................................................................................................................................. 92
418. VoIP Drupal .................................................................................................................................................................. 93
419. Vojo.co ......................................................................................................................................................................... 93
420. VozMob ........................................................................................................................................................................ 93
421. What's Up ..................................................................................................................................................................... 93
422. Whose Voices? Twitter Citation in the Media .............................................................................................................. 93



V. Michael Bove Jr.—Object-Based Media
How sensing, understanding, and new interface technologies can change everyday life,
the ways in which we communicate with one another, storytelling, and entertainment.

1. 3D Telepresence Chair V. Michael Bove Jr. and Daniel Novy

NEW LISTING

An autostereoscopic (no glasses) 3D display engine is combined with a "Pepper's
Ghost" setup to create an office chair that appears to contain a remote meeting
participant. The system geometry is also suitable for other applications such as
tabletop displays or automotive heads-up displays.

2. Calliope Edwina Portocarrero

NEW LISTING

Calliope is the follow-up to the NeverEnding Drawing Machine. A portable,
paper-based platform for interactive story making, it allows physical editing of
shared digital media at a distance. The system is composed of a network of creation
stations that seamlessly blend analog and digital media. Calliope documents and
displays the creative process with no need to interact directly with a computer. By
using human-readable tags and allowing any object to be used as material for
creation, it offers opportunities for cross-cultural and cross-generational
collaboration among peers with expertise in different media.

3. Consumer Holo-Video V. Michael Bove Jr., James D. Barabas, Sundeep Jolly and Daniel E. Smalley
The goal of this project, building upon work begun by Stephen Benton and the
Spatial Imaging group, is to create an inexpensive desktop monitor for a PC or
game console that displays holographic video images in real time, suitable for
entertainment, engineering, or medical imaging. To date, we have demonstrated the
fast rendering of holo-video images (including stereographic images that unlike
ordinary stereograms have focusing consistent with depth information) from
OpenGL databases on off-the-shelf PC graphics cards; current research addresses
new optoelectronic architectures to reduce the size and manufacturing cost of the
display system.

Alumni Contributor: Quinn Y J Smithwick

4. Direct Fringe Writing of Computer-Generated Holograms
V. Michael Bove Jr., Sundeep Jolly and University of Arizona College of Optical Sciences

NEW LISTING

Photorefractive polymer has many attractive properties for dynamic holographic
displays; however, the current display systems based around its use involve
generating holograms by optical interference methods that complicate the optical
and computational architectures of the systems and limit the kinds of holograms that
can be displayed. We are developing a system to write computer-generated
diffraction fringes directly from spatial light modulators to photorefractive polymers,
resulting in displays with reduced footprint and cost, and potentially higher
perceptual quality.
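
As a rough illustration of what computing such fringes involves, the sketch below (Python/NumPy) generates a Fresnel zone pattern for a single on-axis object point: the paraxial interference fringes between the point's spherical wave and a plane reference wave at the modulator plane. The panel resolution, pixel pitch, wavelength, and point depth are illustrative assumptions, not parameters of the actual display.

import numpy as np

# Assumed (illustrative) panel and optical parameters.
W, H = 1024, 768          # modulator resolution in pixels
PITCH = 8e-6              # pixel pitch: 8 micrometers
WAVELENGTH = 532e-9       # green laser line
Z = 0.30                  # object point 30 cm from the modulator plane

# Physical coordinates of each modulator pixel, centered on the optical axis.
x = (np.arange(W) - W / 2) * PITCH
y = (np.arange(H) - H / 2) * PITCH
X, Y = np.meshgrid(x, y)

k = 2 * np.pi / WAVELENGTH

# Paraxial (Fresnel) phase of a point source at depth Z relative to an
# on-axis plane reference wave; the cosine of this phase is the fringe pattern.
phase = k * (X ** 2 + Y ** 2) / (2 * Z)
fringes = 0.5 + 0.5 * np.cos(phase)   # normalized to [0, 1]

# 'fringes' could now be quantized and written to the spatial light modulator.
print(fringes.shape, fringes.min(), fringes.max())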

5. Everything Tells a Story V. Michael Bove Jr., David Cranor and Edwina Portocarrero
Following upon work begun in the Graspables project, we are exploring what
happens when a wide range of everyday consumer products can sense, interpret
into human terms (using pattern recognition methods), and retain memories, such
that users can construct a narrative with the aid of the recollections of the "diaries"
of their sporting equipment, luggage, furniture, toys, and other items with which they
interact.



6. Guided-Wave Light Modulator V. Michael Bove Jr., Daniel Smalley and Quinn Smithwick
We are developing inexpensive, efficient, high-bandwidth light modulators based on
lithium niobate guided-wave technology. These modulators are suitable for
demanding, specialized applications such as holographic video displays, as well as
other light modulation uses such as compact video projectors.

7. Infinity-by-Nine V. Michael Bove Jr. and Daniel Novy

NEW LISTING

We expand the home-video viewing experience by generating imagery to extend
the TV screen and give the impression that the scene wraps completely around the
viewer. Optical flow, color analysis, and heuristics extrapolate beyond the screen
edge, where projectors provide the viewer's perceptual vision with low-detail
dynamic patterns that are perceptually consistent with the video imagery and
increase the sense of immersive presence and participation. We perform this
processing in real time using standard microprocessors and GPUs.
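
A highly simplified sketch of the per-frame processing this description implies, using OpenCV's dense optical flow and blurred edge strips as the low-detail extrapolation; the project's actual heuristics and GPU pipeline are not reproduced here.

import cv2
import numpy as np

def extend_frame(prev_gray, gray, frame, margin=160):
    """Estimate motion near the frame edges and synthesize low-detail side panels.

    prev_gray, gray: consecutive grayscale frames; frame: current BGR frame.
    Returns the frame padded on the left and right with extrapolated content.
    """
    # Dense optical flow between consecutive frames (Farneback's method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Mean horizontal motion near each edge, used to keep the panels
    # roughly consistent with the on-screen motion.
    left_shift = int(np.clip(flow[:, :margin, 0].mean(), -margin, margin))
    right_shift = int(np.clip(flow[:, -margin:, 0].mean(), -margin, margin))

    # Low-detail extrapolation: heavily blurred copies of the edge strips.
    left_strip = cv2.GaussianBlur(frame[:, :margin], (51, 51), 0)
    right_strip = cv2.GaussianBlur(frame[:, -margin:], (51, 51), 0)
    left_panel = np.roll(left_strip, left_shift, axis=1)
    right_panel = np.roll(right_strip, right_shift, axis=1)

    return np.hstack([left_panel, frame, right_panel])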

8. Narratarium V. Michael Bove Jr., Catherine Havasi, Katherine (Kasia) Hayden, Daniel Novy,
Jie Qi and Robert H. Speer
NEW LISTING
Remember telling scary stories in the dark with flashlights? Narratarium is an
immersive storytelling environment to augment creative play using texture, color,
and image. We are using natural language processing to listen to and understand
stories being told, and thematically augment the environment using color and
images. As a child tells stories about a jungle, the room is filled with greens and
browns and foliage comes into view. A traveling parent can tell a story to a child and
fill the room with images, color, and presence.

9. ProtoTouch: Multitouch Interfaces to Everyday Objects V. Michael Bove Jr. and David Cranor

An assortment of everyday objects is given the ability to understand multitouch
gestures of the sort used in mobile-device user interfaces, enabling people to use
such increasingly familiar gestures to control a variety of objects, and to "copy" and
"paste" configurations and other information among them.

10. ShakeOnIt V. Michael Bove Jr. and David Cranor

We are exploring ways to encode information exchange into preexisting natural
interaction patterns, both between people and between a single user and objects
with which he or she interacts on a regular basis. Two devices are presented to
provoke thoughts regarding these information interchange modalities: a pair of
gloves that requires two users to complete a "secret handshake" in order to gain
shared access to restricted information, and a doorknob that recognizes the grasp
of a user and becomes operational if the person attempting to use it is authorized to
do so.

11. Simple Spectral Sensing Andrew Bardagjy

NEW LISTING

The availability of cheap LEDs and diode lasers in a variety of wavelengths enables
creation of simple and cheap spectroscopic sensors for specific tasks such as food
shopping and preparation, healthcare sensing, material identification, and detection
of contaminants or adulterants.
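
A toy version of the classification step such a sensor might use: read a photodiode while lighting the sample with each LED in turn, normalize the readings, and compare the resulting signature to stored references. The wavelengths, signatures, and readings below are invented for illustration.

# Hypothetical normalized reflectance signatures at three LED wavelengths
# (e.g., 470 nm, 590 nm, 940 nm) for a few reference materials.
REFERENCES = {
    "olive oil": [0.22, 0.55, 0.80],
    "water":     [0.60, 0.58, 0.15],
    "skim milk": [0.75, 0.72, 0.65],
}

def normalize(values):
    total = sum(values) or 1.0
    return [v / total for v in values]

def classify(readings):
    """Return the reference material whose signature is closest to the readings."""
    sig = normalize(readings)
    best_name, best_dist = None, float("inf")
    for name, ref in REFERENCES.items():
        ref_sig = normalize(ref)
        dist = sum((a - b) ** 2 for a, b in zip(sig, ref_sig)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

# Example: raw ADC counts measured with each LED lit in turn.
print(classify([310, 290, 250]))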

12. Slam Force Net V. Michael Bove Jr., Santiago Alfaro and Daniel Novy

NEW LISTING

A basketball net incorporates segments of conductive fiber whose resistance
changes with degree of stretch. By measuring this resistance over time, hardware
associated with this net can calculate force and speed of a basketball traveling
through the net. Applications include training, toys that indicate the force and speed
on a display, “dunk competitions,” and augmented reality effects on television
broadcasts. This net is far less expensive and more robust than other approaches
to measuring data about the ball (e.g., photosensors or ultrasonic sensors) and
doesn’t require a physical change to the hoop or backboard other than providing
electrical connections to the net. Another application of the material is a flat net that
can measure velocity of a ball hit or pitched into it (as in baseball or tennis), and can
measure position as well (e.g., for determining whether a practice baseball pitch
would have been a strike).
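
As a sketch of the signal processing this suggests, the following code turns a resistance-versus-time trace into an estimated transit speed (from the duration of the stretch pulse) and a relative force reading (from the peak resistance change). The sample rate, thresholds, and net length are assumed values, not measurements of the actual net.

# Minimal sketch of turning a resistance-vs-time trace from the net into an
# estimate of ball speed and a relative force reading. All constants below are
# illustrative assumptions.

SAMPLE_RATE_HZ = 1000      # resistance samples per second
BASELINE_OHMS = 120.0      # resistance of the relaxed net
THRESHOLD_OHMS = 150.0     # resistance indicating significant stretch
NET_LENGTH_M = 0.40        # approximate travel distance through the net

def analyze(trace):
    """trace: list of resistance samples (ohms) covering one ball transit."""
    stretched = [i for i, r in enumerate(trace) if r > THRESHOLD_OHMS]
    if not stretched:
        return None
    duration_s = (stretched[-1] - stretched[0] + 1) / SAMPLE_RATE_HZ
    speed_mps = NET_LENGTH_M / duration_s          # average transit speed
    peak_stretch = max(trace) - BASELINE_OHMS      # proxy for peak force on the net
    return speed_mps, peak_stretch

# Example: a synthetic 80 ms stretch pulse.
trace = [BASELINE_OHMS] * 100 + [180.0] * 80 + [BASELINE_OHMS] * 100
print(analyze(trace))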

13. SurroundVision V. Michael Bove Jr. and Santiago Alfaro

Adding augmented reality to the living room TV, we are exploring the technical and
creative implications of using a mobile phone or tablet (and possibly also dedicated
devices like toys) as a controllable "second screen" for enhancing television
viewing. Thus, a viewer could use the phone to look beyond the edges of the
television to see the audience for a studio-based program, to pan around a sporting
event, to take snapshots for a scavenger hunt, or to simulate binoculars to zoom in
on a part of the scene. Recent developments include the creation of a mobile device
app for Apple products and user studies involving several genres of broadcast
television programming.
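
One way to picture the core mapping, assuming the extra content is stored as a wide panorama registered to the TV image: the handheld device's yaw and pitch select which crop of the panorama to display. The field-of-view and panorama dimensions below are made up for illustration.

# Illustrative assumptions about the content and the phone's display.
PANO_W, PANO_H = 7200, 1800        # panorama pixels (360 deg x 90 deg)
PHONE_FOV_X, PHONE_FOV_Y = 60, 40  # degrees visible on the phone screen

def viewport(yaw_deg, pitch_deg):
    """Map phone orientation (degrees) to a crop rectangle in the panorama.

    Yaw 0 / pitch 0 means the phone is pointed straight at the TV,
    which sits at the center of the panorama.
    """
    px_per_deg_x = PANO_W / 360.0
    px_per_deg_y = PANO_H / 90.0

    w = int(PHONE_FOV_X * px_per_deg_x)
    h = int(PHONE_FOV_Y * px_per_deg_y)

    cx = int(PANO_W / 2 + yaw_deg * px_per_deg_x) % PANO_W
    cy = int(PANO_H / 2 - pitch_deg * px_per_deg_y)
    cy = max(h // 2, min(PANO_H - h // 2, cy))   # clamp vertically

    return cx - w // 2, cy - h // 2, w, h        # x, y, width, height

print(viewport(0, 0))      # looking straight at the TV
print(viewport(45, 10))    # panning right and slightly up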

14. The "Bar of Soap": Grasp-Based Interfaces V. Michael Bove Jr. and Brandon Taylor

We have built several handheld devices that combine grasp and orientation sensing
with pattern recognition in order to provide highly intelligent user interfaces. The Bar
of Soap is a handheld device that senses the pattern of touch and orientation when
it is held, and reconfigures to become one of a variety of devices, such as phone,
camera, remote control, PDA, or game machine. Pattern-recognition techniques
allow the device to infer the user's intention based on grasp. Another example is a
baseball that determines a user's pitching style as an input to a video game.
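
A compact sketch of the grasp-classification idea: treat the binary touch-sensor pattern plus an orientation reading as a feature vector and match it against labeled example grasps with a nearest-neighbor model. The sensor count, training grasps, and use of scikit-learn are illustrative assumptions; the device's actual sensor layout and classifier are not specified here.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each sample: 12 binary touch-pad readings followed by 3 accelerometer axes (g).
# The sensor count and training data are invented for illustration.
X_train = np.array([
    [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0,  0.0, 0.0, 1.0],   # held flat, two-handed
    [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0,  0.0, 1.0, 0.1],   # held upright to the ear
    [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0,  0.9, 0.1, 0.3],   # landscape, thumbs on edges
])
y_train = ["gamepad", "phone", "camera"]

model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# A new grasp: mostly edge contacts with the device held landscape.
sample = np.array([[0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0,  0.8, 0.2, 0.4]])
print(model.predict(sample)[0])   # prints "camera" for this synthetic grasp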

15. Vision-Based Interfaces for Mobile Devices V. Michael Bove Jr. and Santiago Alfaro

Mobile devices with cameras have enough processing power to do simple
machine-vision tasks, and we are exploring how this capability can enable new user
interfaces to applications. Examples include dialing someone by pointing the
camera at the person's photograph, or using the camera as an input to allow
navigating virtual spaces larger than the device's screen.
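
As one plausible prototype of the "dial by pointing the camera at a photograph" example, the sketch below matches a camera frame against stored contact photos with normalized template matching in OpenCV; the project's actual recognition method is not stated in this description, and the file names and numbers are hypothetical.

import cv2

# Hypothetical mapping from a stored contact photo to a phone number.
CONTACTS = {
    "alice.jpg": "+1-617-555-0100",
    "bob.jpg":   "+1-617-555-0101",
}

def match_contact(camera_frame_path, threshold=0.7):
    """Return the number whose stored photo best matches the camera frame."""
    frame = cv2.imread(camera_frame_path, cv2.IMREAD_GRAYSCALE)
    if frame is None:
        return None
    best_number, best_score = None, threshold
    for photo, number in CONTACTS.items():
        template = cv2.imread(photo, cv2.IMREAD_GRAYSCALE)
        if template is None:
            continue
        # Normalized cross-correlation; values near 1.0 mean a strong match.
        result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        score = float(result.max())
        if score > best_score:
            best_number, best_score = number, score
    return best_number

print(match_contact("camera_frame.jpg"))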



Ed Boyden—Synthetic Neurobiology
How to engineer intelligent neurotechnologies to repair pathology, augment cognition,
and reveal insights into the human condition.

16. Direct Engineering and Testing of Novel Therapeutic Platforms for Treatment of Brain Disorders
Gilberto Abram, Leah Acker, Zack Anderson, Nir Grossman, Xue Han, Mike Henninger, Margaret Kim, Ekavali Mishra, Fumi Yoshida

NEW LISTING

New technologies for controlling neural circuit dynamics, or entering information into
the nervous system, may be capable of serving in therapeutic roles for improving
the health of human patients–enabling the restoration of lost senses, the control of
aberrant or pathological neural dynamics, and the augmentation of neural circuit
computation, through prosthetic means. We are assessing the translational
possibilities opened up by our technologies, exploring the safety and efficacy of
optogenetic neuromodulation in multiple animal models, and also pursuing, both in
our group and in collaborations with others, proofs-of-principle of new kinds of
optical neural control prosthetic. By combining observation of brain activity with
real-time analysis and responsive optical neurostimulation, new kinds of "brain
co-processor" may be possible which can work efficaciously with the brain to
augment its computational abilities, e.g., in the context of cognitive, emotional,
sensory, or motor disability.

17. Exploratory Technologies for Understanding Neural Circuits
Brian Allen, Rachel Bandler, Steve Bates, Fei Chen, Jonathan Gootenberg, Suhasa Kodandaramaiah, Daniel Martin-Alarcon, Paul Tillberg, Aimei Yang

NEW LISTING

We are continually exploring new strategies for understanding neural circuits, often
in collaboration with other scientific, engineering, and biology research groups. If
you would like to collaborate on such a project, please contact us.

18. Hardware and Systems for Control of Neural Circuits with Light
Claire Ahn, Brian Allen, Michael Baratta, Jake Bernstein, Stephanie Chan, Brian Chow, August Dietrich, Nir Grossman, Alexander Guerra, Mike Henninger, Emily Ko, Alex Rodriguez, Jorg Scholvin, Giovanni Talei Franzesi, Ash Turza, Christian Wentz, Anthony Zo

The brain is a densely wired, heterogeneous circuit made out of thousands of
different kinds of cell. Over the last several years we have developed a set of
"optogenetic" reagents, fully genetically encoded reagents that, when targeted to
specific cells, enable their physiology to be controlled via light. To confront the 3D
complexity of the living brain, enabling the analysis of the circuits that causally drive
or support specific neural computations and behaviors, our lab and our
collaborators have developed hardware for delivery of light into the brain, enabling
control of complexly shaped neural circuits, as well as the ability to combinatorially
activate and silence neural activity in distributed neural circuits. We anticipate that
these tools will enable the systematic analysis of the brain circuits that
mechanistically and causally contribute to specific behaviors and pathologies.

19. Molecular Reagents Enabling Control of Neurons and Biological Functions with Light
Fei Chen, Yongku Cho, Brian Chow, Amy Chuong, Allison Dobry, Xue Han, Nathan Klapoetke, Albert Kwon, Mingjie Li, Daniel Martin-Alarcon, Tania Morimoto, Xiaofeng Qian, Daniel Schmidt, Aimei Yang

Over the last several years our lab and our collaborators have pioneered a new
area–the development of a number of fully genetically encoded reagents that, when
targeted to specific cells, enable their physiology to be controlled via light. These
reagents, known as optogenetic tools, enable temporally precise control of neural
electrical activity, cellular signaling, and other high-speed natural as well as
synthetic biology processes and pathways using light. Such tools are now in
widespread use in neuroscience, for the study of the neuron types and activity
patterns that mechanistically and causally contribute to processes ranging from
cognition to emotion to movement, and to brain disorders. These tools are also
being evaluated as components of prototype neural control devices for ultra-precise
treatment of intractable brain disorders.

20. Recording and Data-Analysis Technologies for Observing and Analyzing Neural Circuit Dynamics
Brian Allen, Scott Arfin, Jake Bernstein, Brian Chow, Mike Henninger, Justin Kinney, Suhasa Kodandaramaiah, Caroline Moore-Kochlacs, Nikita Pak, Jorg Scholvin, Annabelle Singer, Al Strelzoff, Giovanni Talei Franzesi, Ash Turza, Christian Wentz, Ian Wicker

NEW LISTING

The brain is a 3D, densely wired circuit that computes via large sets of widely
distributed neurons interacting at fast timescales. To understand the brain, ideally it
would be possible to observe the activity of many neurons with as great a degree of
precision as possible, so as to understand the neural codes and dynamics that are
produced by the circuits of the brain. With collaborators, our lab is developing
innovations to enable such analyses of neural circuit dynamics. Such neural
observation strategies may also serve as detailed biomarkers of brain disorders, or
indicators of potential drug side effects. We have also developed robotic methods
for automated intracellular recording of neurons in the living brain, which uniquely
enables the characterizing of synaptic and ion channel influences on neural
computation with single-cell resolution. Such technologies may, in conjunction with
optogenetics, enable closed-loop neural control technologies, which can introduce
information into the brain as a function of brain state ("brain co-processors"),
enabling new kinds of circuit characterization tools as well as new kinds of
advanced brain-repair prosthetics.

21. Understanding Neural Circuit Computations and Finding New Therapeutic Targets
Carissa Jansen, Leah Acker, Brian Allen, Michael Baratta, Steve Bates, Sean Batir, Jake Bernstein, Tim Buschman, Huayu Ding, Stephen Eltinge, Xue Han, Kyungman Kim, Suhasa Kodandaramaiah, Pei-Ann Lin, Carolina Lopez-Trevino, Patrick Monahan, Caroline Moor

NEW LISTING

We are using our tools–such as optogenetic neural control and brain circuit
dynamics measurement–both within our lab and in collaborations with others, to
analyze how specific sets of circuit elements within neural circuits give rise to
behaviors and functions such as cognition, emotion, movement, and sensation. We
are also determining which neural circuit elements can initiate or sustain
pathological brain states. Principles of controlling brain circuits may yield
fundamental insights into how best to go about treating brain disorders. Finally, we
are screening for neural circuit targets that, when altered, present potential
therapeutic benefits, and which may serve as potential drug targets or electrical
stimulation targets. In this way we hope to explore systematic, causal, temporally
precise analyses of how neural circuits function, yielding both fundamental scientific
insights and important clinically relevant principles.

Cynthia Breazeal—Personal Robots


How to build socially engaging robots and interactive technologies that provide people
with long-term social and emotional support to help them live healthier lives, connect
with others, and learn better.

22. AIDA: Affective Intelligent Driving Agent Cynthia Breazeal and Kenton Williams

Drivers spend a significant amount of time multi-tasking while they are behind the
wheel. These dangerous behaviors, particularly texting while driving, can lead to
distractions, and ultimately accidents. Many in-car interfaces designed to address
this issue still do not take a proactive role to assist the driver nor leverage aspects
of the driver's daily life to make the driving experience more seamless. In
collaboration with Volkswagen/Audi and the SENSEable City Lab we are developing
AIDA (Affective Intelligent Driving Agent), a robotic driver-vehicle interface that acts
as a sociable partner. AIDA elicits facial expressions and strong non-verbal cues for
engaging social interaction with the driver. AIDA also leverages the driver's mobile
device as its face, which promotes safety, offers proactive driver support and fosters
deeper personalization to the driver.

23. Cloud-HRI Cynthia Breazeal, Nicholas DePalma, Adam Setapen and Sonia Chernova

NEW LISTING

Imagine opening your eyes and being awake for only half an hour at a time. This
is the life that robots traditionally live. This is due to a number of factors such as
battery life and wear on prototype joints. Roboticists have typically muddled through
this challenge by crafting handmade models of the world or performing machine
learning with synthetic data–and sometimes real-world data. While robotics
researchers have traditionally used large distributed systems to do perception,
planning, and learning, cloud-based robotics aims to link all of a robot's
experiences. This movement aims to build large-scale machine learning algorithms
that use experience from large groups of people, whether sourced from a large
number of tabletop robots or a large number of experiences with virtual agents.
Large-scale robotics aims to change embodied AI as it changed non-embodied AI.

24. DragonBot: Android phone robots for long-term HRI Adam Setapen, Natalie Freed, and Cynthia Breazeal

DragonBot is a new platform built to support long-term interactions between children
and robots. The robot runs entirely on an Android cell phone, which displays an
animated virtual face. Additionally, the phone provides sensory input (camera and
NEW LISTING microphone) and fully controls the actuation of the robot (motors and speakers).
Most importantly, the phone always has an Internet connection, so a robot can
harness cloud-computing paradigms to learn from the collective interactions of
multiple robots. To support long-term interactions, DragonBot is a "blended-reality"
character–if you remove the phone from the robot, a virtual avatar appears on the
screen and the user can still interact with the virtual character on the go. Costing
less than $1,000, DragonBot was specifically designed to be a low-cost platform
that can support longitudinal human-robot interactions "in the wild."

25. Huggable: A Robotic Companion for Long-Term Health Care, Education, and Communication
Cynthia Breazeal, Walter Dan Stiehl, Robert Toscano, Jun Ki Lee, Heather Knight, Sigurdur Orn Adalgeirsson, Jeff Lieberman and Jesse Gray

The Huggable is a new type of robotic companion for health care, education, and
social communication applications. The Huggable is much more than a fun,
interactive robotic companion; it functions as an essential team member of a triadic
interaction. Therefore, the Huggable is not meant to replace any particular person in
a social network, but rather to enhance it. The Huggable is being designed with a
full-body sensitive skin with over 1500 sensors, quiet back-drivable actuators, video
cameras in the eyes, microphones in the ears, an inertial measurement unit, a
speaker, and an embedded PC with 802.11g wireless networking. An important
design goal for the Huggable is to make the technology invisible to the user. You
should not think of the Huggable as a robot but rather as a richly interactive teddy
bear.

Alumni Contributors: Matthew Berlin, Daniel Bernhardt (Cambridge University) and
Kuk-Hyun Han (Samsung)



26. MDS: Crowdsourcing Human-Robot Interaction: Online Game to Study Collaborative Human Behavior
Cynthia Breazeal, Jason Alonso and Sonia Chernova

Many new applications for robots require them to work alongside people as capable
members of human-robot teams. We have developed Mars Escape, a two-player
online game designed to study how humans engage in teamwork, coordination, and
interaction. Data gathered from hundreds of online games is being used to develop
computational models of human collaborative behavior in order to create an
autonomous robot capable of acting as a reliable human teammate. In the summer
of 2010, we recreated the Mars Escape game in real life at the Boston Museum
of Science, inviting museum visitors to perform collaborative tasks together with
the autonomous MDS robot Nexi.

27. MDS: Exploring the Dynamics of Human-Robot Collaboration
Cynthia Breazeal, Sigurdur Orn Adalgeirsson, Nicholas Brian DePalma, Jin Joo Lee, Philipp Robbel; Alborz Geramifard, Jon How, Julie Shah (CSAIL); Malte Jung and Pamela Hinds (Stanford)

NEW LISTING

As robots become more and more capable, we will begin to invite them into our
daily lives. There have been few examples of mobile robots able to carry out
everyday tasks alongside humans. Though research on this topic is becoming more
and more prevalent, we are just now beginning to understand what it means to
collaborate. This project aims to unravel the dynamics involved in taking on
leadership roles in collaborative tasks as well as balancing and maintaining the
expectations of each member of the group (whether it be robot or human). This
challenge involves aspects of inferring internal human state, role support and
planning, as well as optimizing and keeping synchrony amongst team members
"tight" in their collaboration.

Alumni Contributors: Matthew Berlin and Jesse Gray

28. Robotic Textiles Cynthia Breazeal and Adam Whiton

We are investigating e-textiles and fiber-electronics to develop unique
soft-architecture robotic components. We have been developing large area force
sensors utilizing quantum tunneling composites integrated into textiles creating
fabrics that can cover the body/surface of the robot and sense touch. By using
e-textiles we shift from the metaphor of a sensing skin, often used in robotics, to
one of sensing clothing. We incorporated apparel design and construction
techniques to develop modular e-textile surfaces that can be easily attached to a
robot and integrated into a robotic system. Adding new abilities to a robot system
can become as simple as changing their clothes. Our goal is to study social touch
interaction and communication between people and robots while exploring the
benefits of textiles and the textile aesthetic.

29. Socially Assistive Robotics: An NSF Expedition in Computing
Tufts University, University of Southern California, Cynthia Breazeal, Jacqueline Marie Kory, Jin Joo Lee, David Robert, Edith Ackermann, Catherine Havasi, Kasia Hayden with Stanford University, Sooyeon Jeong, Willow Garage and Yale University

NEW LISTING

Our mission is to develop the computational techniques that will enable the design,
implementation, and evaluation of "relational" robots, to encourage the social,
emotional, and cognitive growth in children, including those with social or cognitive
deficits. Funding for the project comes from the NSF Expeditions in Computing
program. This Expedition has the potential to substantially impact the effectiveness
of education and healthcare, and to enhance the lives of children and other groups
that require specialized support and intervention. In particular, the MIT effort is
focusing on developing second language learning companions for pre-school aged
children, ultimately for ESL (English as a Second Language).



30. Storytelling in the Preschool of the Future David Robert
Using the Preschool of the Future environment, children can create stories that
come to life in the real world. We are developing interfaces that enable children to
author stories in the physical environment—stories where robots are the characters
and children are not only the observers, but also the choreographers and actors in
the stories. To do this, children author stories and robot behaviors using a simple
digital painting interface. By combining the physical affordances of painting with
digital media and robotic characters, stories can come to life in the real world.
Programming in this environment becomes a group activity when multiple children
use these tangible interfaces to program advanced robot behaviors.

31. TinkRBook: Reinventing the Reading Primer Cynthia Breazeal, Angela Chang and David Scott Nunez

TinkRBook is a storytelling system that introduces a new concept of reading, called
textual tinkerability. Textual tinkerability uses storytelling gestures to expose the
text-concept relationships within a scene. Tinkerability prompts readers to become
more physically active and expressive as they explore concepts in reading together.
TinkRBooks are interactive storybooks that prompt interactivity in a subtle way,
enhancing communication between parents and children during shared picture-book
reading. TinkRBooks encourage positive reading behaviors in emergent literacy:
parents act out the story to control the words on-screen, demonstrating print
referencing and dialogic questioning techniques. Young children actively explore the
abstract relationship between printed words and their meanings, even before this
relationship is properly understood. By making story elements alterable within a
narrative, readers can learn to read by playing with how word choices impact the
storytelling experience. Recently, this research has been applied to developing
countries.

32. Zipperbot: Robotic Continuous Closure for Fabric Edge Joining Cynthia Breazeal and Adam Whiton

NEW LISTING

In robotics, the emerging field of electronic textiles and fiber-electronics represents
a shift in morphology from hard and rigid mechatronic components toward a
soft-architecture–and more specifically, a flexible planar surface morphology. It is
thus essential to determine how a robotic system might actuate flexible surfaces for
donning and doffing actions. Zipperbot is a robotic continuous closure system for
joining fabrics and textiles. By augmenting traditional apparel closure techniques
and hardware with robotic attributes, we can incorporate these into robotic systems
for surface manipulation. Through actuating closures, textiles could shape shift or
self-assemble into a variety of forms.

Leah Buechley—High-Low Tech


How to engage diverse audiences in creating their own technology by situating
computation in new contexts and building tools to democratize engineering.

33. Circuit Sketchbook Leah Buechley and Jie Qi

NEW LISTING

The Circuit Sketchbook is a primer on creating expressive electronics using
paper-based circuits. Inside are explanations of useful components with example
circuits, as well as methods for crafting DIY switches and sensors from paper.
There are also circuit templates for building functional electronics directly on the
pages of the book.

34. Codeable Objects Jennifer Jacobs and Leah Buechley

NEW LISTING

Codeable Objects is a library for Processing that allows people to design and build
objects using geometry and programming. Geometric computation offers a host of
powerful design techniques, but its use is limited to individuals with a significant
amount of programming experience or access to complex design software. In
contrast, Codeable Objects allows a range of people, including novice coders,
designers, and artists, to rapidly design, customize, and construct an artifact using
geometric computation and digital fabrication. The programming methods provided
by the library allow the user to program a wide range of structures and designs with
simple code and geometry. When the user compiles their code, the software
outputs tool paths based on their specifications, which can be used in conjunction
with digital fabrication tools to build their object.
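To make the pipeline concrete, here is a minimal sketch in Python (not the Codeable Objects library itself, which is written for Processing) of the same idea: a few lines of code describe a shape, and the program emits a cut path that a digital fabrication tool could consume. The regular-polygon lid and the SVG output format are illustrative assumptions.

    # Sketch only: describe a shape with simple code, then write a cut path.
    import math

    def regular_polygon(cx, cy, radius, sides):
        """Return the vertices of a regular polygon as (x, y) pairs."""
        return [(cx + radius * math.cos(2 * math.pi * i / sides),
                 cy + radius * math.sin(2 * math.pi * i / sides))
                for i in range(sides)]

    def to_svg_path(points):
        """Convert a closed list of vertices into an SVG path string."""
        moves = " ".join(f"L {x:.2f} {y:.2f}" for x, y in points[1:])
        return f"M {points[0][0]:.2f} {points[0][1]:.2f} {moves} Z"

    if __name__ == "__main__":
        lid = regular_polygon(cx=60, cy=60, radius=50, sides=8)   # assumed design
        svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">'
               f'<path d="{to_svg_path(lid)}" fill="none" stroke="black"/></svg>')
        with open("lid.svg", "w") as f:
            f.write(svg)                       # hand the outline to a laser cutter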

35. Computational Textiles Curriculum
Leah Buechley and Kanjun Qiu

NEW LISTING

The Computational Textiles Curriculum is a collection of projects that leverages the
creativity and beauty inherent in e-textiles to create an introductory
computer-science curriculum for middle- and high-school students. The curriculum
is taught through a sequence of hands-on project explorations of increasing
difficulty, with each new project introducing new concepts in computer science,
ranging from basic control flow and abstraction to more complex ideas such as
networking, data processing, and algorithms. Additionally, the curriculum introduces
unique methods of working with the LilyPad Arduino, creating non-traditional
projects such as a game controller, a networked fabric piano, an activity monitor, or
a gesture recognition glove. The projects are validated, calibrated, and evaluated
through a series of workshops with middle- and high-school youth in the Boston
area.

36. Do-It-Your(Cell)f Phone
David A. Mellis and Leah Buechley

NEW LISTING

An exploration into the possibilities for individual construction and customization of
the most ubiquitous of electronic devices, the cellphone. By creating and sharing
open-source designs for the phone's circuit board and case, we hope to encourage
a proliferation of personalized and diverse mobile phones. Freed from the
constraints of mass production, we plan to explore diverse materials, shapes, and
functions. We hope that the project will help us explore and expand the limits of
do-it-yourself (DIY) practice. How close can a homemade project come to the
design of a cutting-edge device? What are the economics of building a high-tech
device in small quantities? Which parts are even available to individual consumers?
What's required for people to customize and build their own devices?

37. Exploring Artisanal Technology
Leah Buechley, Sam Jacoby and David A. Mellis

NEW LISTING

We are exploring the methods by which traditional artisans construct new electronic
technologies using contextually novel materials and processes, incorporating wood,
textiles, reclaimed and recycled products, as well as conventional circuitry. Such
artisanal technologies often address different needs, and are radically different in
form and function than conventionally designed and produced products.

38. LilyPad Arduino Leah Buechley

The LilyPad Arduino is a set of tools that empowers people to build soft, flexible,
fabric-based computers. A set of sewable electronic modules enables users to
blend textile craft, electrical engineering, and programming in surprising, beautiful,
and novel ways. A series of workshops that employed the LilyPad have
demonstrated that tools such as these, which introduce engineering from new
perspectives, are capable of involving unusual and diverse groups in technology
development. Ongoing research will explore how the LilyPad and similar devices
can engage under-represented groups in engineering, change popular assumptions
about the look and feel of technology, and spark hybrid communities that combine
rich crafting traditions with high-tech materials and processes.

39. LilyTiny Leah Buechley and Emily Marie Lovell

NEW LISTING

The LilyTiny is a small sewable breakout board for ATtiny85
microcontrollers–devices which may be integrated into circuits to enable
pre-determined interactions such as lights that flash or areas that can sense touch.
The circuit board can be pre-loaded with a program, enabling students to
incorporate dynamic behaviors into e-textile projects without having to know how to
program microcontrollers.

40. Microcontrollers as Material
Leah Buechley, Sam Jacoby, David A. Mellis, Hannah Perner-Wilson and Jie Qi

NEW LISTING

We've developed a set of tools and techniques that make it easy to use
microcontrollers as an art or craft material, embedding them directly into drawings
or other artifacts. We use the ATtiny45 from Atmel, a small and cheap (~$1)
microcontroller that can be glued directly to paper or other objects. We then
construct circuits using conductive silver ink, dispensed from squeeze bottles with
needle tips. This makes it possible to draw a circuit, adding lights, speakers, and
other electronic components.

41. Open Source Consumer Electronics
David A. Mellis and Leah Buechley

We offer case studies in the ways that digital fabrication allows us to treat the
designs of products as a kind of source code: files that can be freely shared,
modified, and produced. In particular, the case studies combine traditional
electronic circuit boards and components (a mature digital fabrication process) with
laser-cut or 3D printed materials. They demonstrate numerous possibilities for
individual customizations both pre- and post-fabrication, as well as a variety of
potential production and distribution processes and scales.

42. Programmable Paintings
Leah Buechley and Jie Qi

NEW LISTING

Programmable Paintings are a series of artworks that use electronic elements such
as LED lights and microphone sensors as "pigments" in paintings. The goal is to
blend traditional elements of painting–color, texture, composition–with these
electronic components to create a new genre of time-based and interactive art.

43. StoryClip Leah Buechley and Sam Jacoby

NEW LISTING

Exploring conductive inks as an expressive medium for narrative storytelling,
StoryClip synthesizes electrical functionality, aesthetics, and creativity, to turn a
drawing into a multimedia interface that promotes rich, multi-level engagement with
children.

Catherine Havasi—Digital Intuition
How to give computers human-like intuition so they can better understand us.

44. CharmMe Catherine Havasi, Brett Samuel Lazarus and Victor J Wang

NEW LISTING

CharmMe is a mobile social discovery application that helps people meet each
other during events. The application blends physical and digital proximity to help
you connect with other like-minded individuals. Armed with RFID sensors and a
model of how the Lab works, CharmMe determines who you should talk to using
information including checking in to conference talks or “liking” projects using QR
codes. In addition, possible opening topics of conversation are suggested based on
users' expressed similar interests.
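As a rough illustration of the matching step described above (one plausible approach, assumed for illustration rather than CharmMe's actual model), the Python sketch below ranks other attendees by how much their liked projects and talks overlap with yours.

    # Sketch only: rank attendees by Jaccard overlap of expressed interests.
    interests = {                                   # made-up attendee data
        "ana":  {"tangible interfaces", "e-textiles", "robots"},
        "ben":  {"robots", "common sense", "language"},
        "chen": {"e-textiles", "fabrication"},
    }

    def suggestions(me):
        """Return other attendees sorted by interest overlap with `me`."""
        mine = interests[me]
        return sorted(
            ((len(mine & theirs) / len(mine | theirs), who)
             for who, theirs in interests.items() if who != me),
            reverse=True)

    print(suggestions("ana"))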

45. ConceptNet Catherine Havasi, Robert Speer, Henry Lieberman and Marvin Minsky

Imparting common-sense knowledge to computers enables a new class of


intelligent applications better equipped to make sense of the everyday world and
assist people with everyday tasks. Our approach to this problem is ConceptNet, a
freely available common-sense knowledge base that possesses a great breadth of
general knowledge that computers should already know, ready to be incorporated
into applications. ConceptNet 5 is a semantic network with millions of nodes and
edges, built from a variety of interlinked resources, both crowd-sourced and
expert-created, including the Open Mind Common Sense corpus, WordNet,
Wikipedia, and OpenCyc. It contains information in many languages including
English, Chinese, Japanese, Dutch, and Portuguese, resulting from a collaboration
of research projects around the world. In this newest version of ConceptNet, we aim
to automatically assess the reliability of its data when it is collected from variously
reliable sources and processes.

Alumni Contributors: Jason Alonso, Kenneth C. Arnold, Ian Eslick, Xinyu H. Liu and
Push Singh
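For readers unfamiliar with the format, the short Python sketch below shows the kind of data ConceptNet holds: weighted, labeled edges between everyday concepts. The specific assertions and weights are made-up examples, not excerpts from the corpus, and the lookup is a toy rather than the ConceptNet API.

    # Sketch only: a handful of ConceptNet-style assertions in a dictionary.
    from collections import defaultdict

    edges = [                                    # (start, relation, end, weight)
        ("dog", "IsA", "pet", 2.0),
        ("dog", "CapableOf", "bark", 1.5),
        ("pet", "AtLocation", "home", 1.0),
        ("bark", "HasSubevent", "make noise", 0.5),
    ]

    graph = defaultdict(list)
    for start, relation, end, weight in edges:
        graph[start].append((relation, end, weight))

    def what_do_we_know(concept):
        """Print every assertion whose left-hand concept matches."""
        for relation, end, weight in graph[concept]:
            print(f"{concept} --{relation}--> {end}  (weight {weight})")

    what_do_we_know("dog")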

46. Corona Rob Speer and Catherine Havasi

NEW LISTING

How can a knowledge base learn from the Internet, when you shouldn't trust
everything you read on the Internet? CORONA is a system for building a knowledge
base from a combination of reliable and unreliable sources, including
crowd-sourced contributions, expert knowledge, Games with a Purpose, automatic
machine reading, and even knowledge that is imperfectly derived from other
knowledge in the system. It marks knowledge as increasingly reliable as more sources
confirm it, and as unreliable when sources disagree; by running the system in
reverse, it can discover which knowledge sources are the most trustworthy.
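A simplified illustration of that mutual-reinforcement loop can be written in a few lines of Python: statements gain reliability from the trust of the sources asserting them, and sources gain trust from the reliability of their statements. This is a sketch of the general idea, not CORONA's actual algorithm, and the toy sources and statements are assumptions.

    # Sketch only: alternate reliability and trust updates for a few iterations.
    assertions = {                         # source -> statements it contributes
        "crowd":  {"sky is blue", "cats are robots"},
        "expert": {"sky is blue", "water is wet"},
        "game":   {"water is wet"},
    }

    trust = {s: 1.0 for s in assertions}   # start with equal trust in every source
    for _ in range(10):
        reliability = {}
        for source, stmts in assertions.items():
            for stmt in stmts:
                reliability[stmt] = reliability.get(stmt, 0.0) + trust[source]
        total = sum(reliability.values())
        reliability = {k: v / total for k, v in reliability.items()}
        for source, stmts in assertions.items():
            trust[source] = sum(reliability[s] for s in stmts) / len(stmts)

    print(sorted(reliability.items(), key=lambda kv: -kv[1]))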

47. Divisi: Reasoning Over Semantic Relationships
Robert Speer, Catherine Havasi, Kenneth Arnold, and Jason Alonso

We have developed technology that enables easy analysis of semantic data,
blended in various ways with common-sense world knowledge. The results support
reasoning by analogy and association. A packaged library of code is being made
available to all sponsors.
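The sketch below illustrates the general technique (a truncated SVD over a concept/feature matrix used for association) rather than Divisi's own API; the tiny matrix and concept labels are toy assumptions.

    # Sketch only: low-rank concept vectors from a toy concept/feature matrix.
    import numpy as np

    concepts = ["dog", "cat", "car"]
    features = ["IsA pet", "CapableOf run", "HasA wheel"]
    M = np.array([[1.0, 1.0, 0.0],    # dog
                  [1.0, 1.0, 0.0],    # cat
                  [0.0, 1.0, 1.0]])   # car

    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = 2
    vectors = U[:, :k] * s[:k]         # truncated concept vectors

    def similarity(a, b):
        """Cosine similarity between two concepts in the reduced space."""
        va, vb = vectors[concepts.index(a)], vectors[concepts.index(b)]
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

    print("dog~cat:", similarity("dog", "cat"))
    print("dog~car:", similarity("dog", "car"))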

48. Narratarium V. Michael Bove Jr., Catherine Havasi, Katherine (Kasia) Hayden, Daniel Novy,
Jie Qi and Robert H. Speer
NEW LISTING
Remember telling scary stories in the dark with flashlights? Narratarium is an
immersive storytelling environment to augment creative play using texture, color,
and image. We are using natural language processing to listen to and understand
stories being told, and thematically augment the environment using color and
images. As a child tells stories about a jungle, the room is filled with greens and
browns and foliage comes into view. A traveling parent can tell a story to a child and
fill the room with images, color, and presence.

49. Open Mind Common Sense
Michael Luis Puncel, Karen Anne Sittig and Robert H. Speer
The biggest problem facing artificial intelligence today is how to teach computers
enough about the everyday world so that they can reason about it like we do—so
that they can develop "common sense." We think this problem may be solved by
harnessing the knowledge of people on the Internet, and we have built a Web site to
make it easy and fun for people to work together to give computers the millions of
little pieces of ordinary knowledge that constitute "common sense." Teaching
computers how to describe and reason about the world will give us exactly the
technology we need to take the Internet to the next level, from a giant repository of
Web pages to a new state where it can think about all the knowledge it contains; in
essence, to make it a living entity.

Alumni Contributors: Jason Alonso, Kenneth C. Arnold, Ian Eslick, Henry Lieberman, Xinyu H. Liu, Bo Morgan, Push Singh and Dustin Arthur Smith

50. Red Fish, Blue Fish Robert Speer and Catherine Havasi

With commonsense computing, we can discover trends in the topics that people are
talking about right now. Red Fish Blue Fish takes input in real time from lots of
political blogs, and creates a visualization of what topics are being discussed by the
left and the right.

51. Semantic Synesthesia
Catherine Havasi, Jason Alonso and Robert H. Speer

Semantic Synesthesia is a program that guesses a color to represent a given input
word or sentence, taking into account both physical descriptions of objects and
emotional connotations. This novel application of artificial intelligence uses
knowledge about the world to build a model of how people think about objects,
emotions, and colors, and uses this model to guess an appropriate color for a word.
Colorizer works over static text and real-time input, such as a speech recognition
stream. It has applications in games, arts, and Web page design.
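A toy sketch of the idea (not the project's actual model, which draws on much broader world knowledge): guess a color for a word by blending the colors of concepts associated with it. The associations and anchor colors below are assumptions.

    # Sketch only: blend the RGB colors of a word's associated concepts.
    anchor_colors = {"sky": (135, 206, 235), "grass": (50, 160, 50),
                     "fire": (220, 60, 30), "night": (15, 15, 60)}
    associations = {"calm": ["sky", "night"], "anger": ["fire"],
                    "picnic": ["grass", "sky"]}

    def guess_color(word):
        """Average the anchor colors of everything the word is associated with."""
        related = associations.get(word, [])
        if not related:
            return (128, 128, 128)          # fall back to neutral gray
        channels = zip(*(anchor_colors[r] for r in related))
        return tuple(sum(c) // len(related) for c in channels)

    print("picnic ->", guess_color("picnic"))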

52. Story Space Catherine Havasi and Michael Luis Puncel

NEW LISTING

The Analogy Space project, which is built upon ConceptNet, has the ability to
identify similar concepts by building vectors out of them in a multi-dimensional
space. Story Space will apply this technique to human narrative in order to provide
a measure of similarity between different stories.
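A minimal sketch of the similarity measure, assuming each story has already been reduced to a bag of concepts (Analogy Space itself derives richer vectors from ConceptNet): embed each story as a concept-count vector and compare stories with cosine similarity. The stories and concept lists are toy assumptions.

    # Sketch only: cosine similarity between stories as bags of concepts.
    import math
    from collections import Counter

    stories = {
        "picnic": ["park", "food", "sun", "friends", "food"],
        "camping": ["forest", "food", "tent", "friends"],
        "commute": ["train", "crowd", "work"],
    }

    def cosine(a, b):
        ca, cb = Counter(a), Counter(b)
        dot = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
        norm = math.sqrt(sum(v * v for v in ca.values())) * \
               math.sqrt(sum(v * v for v in cb.values()))
        return dot / norm if norm else 0.0

    print(cosine(stories["picnic"], stories["camping"]))   # relatively similar
    print(cosine(stories["picnic"], stories["commute"]))   # relatively different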

53. The Glass Infrastructure
Henry Holtzman, Andy Lippman, Matthew Blackshaw, Jon Ferguson, Catherine Havasi, Julia Ma, Daniel Schultz and Polychronis Ypodimatopoulos

This project builds a social, place-based information window into the Media Lab
using 30 touch-sensitive screens strategically placed throughout the physical
complex and at sponsor sites. The idea is to get people to talk among themselves
about the work that they jointly explore in a public place. We present Lab projects
as dynamically connected sets of "charms" that visitors can save, trade, and
explore. The GI demonstrates a framework for an open, integrated IT system and
shows new uses for it.

Alumni Contributors: Rick Borovoy, Greg Elliott and Boris Grigory Kizelshteyn

54. Understanding Dialogue
Catherine Havasi, Anjali Muralidhar and Personal Robots Group

NEW LISTING

In order to extend the Digital Intuition group's ability to understand human language,
a module that fills in the gaps of current technology must be developed to
understand dialogue. This module will be based on a dataset of recorded dialogues
between parents and children while reading an interactive E-book, created by the
Personal Robots group at the MIT Media Lab. The goal is for the module to be able
to identify the emotion and mood of the dialogue in order to make inferences about
what parents and children generally talk about when reading the book and make
suggestions about additional conversation topics. Conversations between an adult
and child while reading a book can greatly contribute to the learning and
development of young children.

Hugh Herr—Biomechatronics
How technology can be used to enhance human physical capability.

55. Artificial Gastrocnemius
Hugh Herr and Ken Endo

Human walking neuromechanical models show how each muscle works during
normal, level-ground walking. They are mainly modeled with clutches and linear
springs, and are able to capture dominant normal walking behavior. This suggests
using a series-elastic clutch at the knee joint for below-knee amputees. We
have developed the powered ankle prosthesis, which generates enough force to
enable a user to walk "normally." However, amputees still have problems at the
knee joint due to the lack of gastrocnemius, which works as an ankle-knee flexor
and a plantar flexor. We hypothesize that metabolic cost and EMG patterns of an
amputee with our powered ankle and virtual gastrocnemius will dramatically
improve.

56. Biomimetic Active Prosthesis for Above-Knee Amputees
Ernesto C. Martinez-Villalpando and Hugh Herr

We propose a novel biomimetic active prosthesis for above-knee amputees. The
clinical impact of this technology focuses on improving an amputee's gait symmetry,
walking speed, and metabolic energy consumption across varied terrain conditions.
The electromechanical design of this robotic device mimics the body's own
musculoskeletal design, using actuator technologies that have muscle-like
behaviors and can integrate control methodologies that exploit the principles of
human locomotion. This work seeks to advance the field of biomechatronics by
contributing to the development of intelligent assistive technologies that adapt to the
needs of the physically challenged.

57. Control of Muscle-Actuated Systems via Electrical Stimulation
Hugh Herr

Motivated by applications in rehabilitation and robotics, we are developing
methodologies to control muscle-actuated systems via electrical stimulation. As a
demonstration of such potential, we are developing centimeter-scale robotic
systems that utilize muscle for actuation and glucose as a primary source of fuel.
This is an interesting control problem because muscles: a) are mechanical
state-dependent actuators; b) exhibit strong nonlinearities; and c) have slow
time-varying properties due to fatigue-recuperation, growth-atrophy, and
damage-healing cycles. We are investigating a variety of adaptive and robust
control techniques to enable us to achieve trajectory tracking, as well as mechanical
power-output control under sustained oscillatory conditions. To implement and test
our algorithms, we developed an experimental capability that allows us to
characterize and control muscle in real time, while imposing a wide variety of
dynamical boundary conditions.

Alumni Contributor: Waleed A. Farahat

58. Effect of a Powered Ankle on Shock Absorption and Interfacial Pressure
Hugh Herr and David Hill

Lower-extremity amputees face a series of potentially serious post-operative
complications. Among these are increased risk of further amputations, excessive
stress on the unaffected and residual limbs, and discomfort at the human-prosthesis
interface. Currently, conventional, passive prostheses have made strides towards
alleviating the risk of experiencing complications, but we believe that the limit of
“dumb” elastic prostheses has been reached; in order to make further strides we
must integrate “smart” technology in the form of sensors and actuators into
lower-limb prostheses. This project compares the elements of shock absorption and
socket pressure between passive and active ankle-foot prostheses. It is an attempt
to quantitatively evaluate the patient’s comfort.

59. FitSocket: A Better Way to Make Sockets
Hugh Herr, Neri Oxman, Arthur Petron and Roy Kornbluh (SRI)
Sockets–the cup-shaped devices that attach an amputated limb to a lower-limb
prosthesis–are made through unscientific, artisanal methods that do not have
repeatable quality and comfort from one amputee to the next. The FitSocket project
aims to identify the correlation between leg tissue properties and the design of a
comfortable socket. We accomplish this by creating a programmable socket called
the FitSocket which can iterate over hundreds of socket designs in minutes instead
of months.

60. Human Walking Model Predicts Joint Mechanics, Electromyography, and Mechanical Economy
Hugh Herr and Ken Endo

We are studying the mechanical behavior of leg muscles and tendons during human
walking in order to motivate the design of economical robotic legs. We hypothesize
that quasi-passive, series-elastic clutch units spanning the knee joint in a
musculoskeletal arrangement can capture the dominant mechanical behaviors of
the human knee in level-ground walking. Biarticular elements necessarily need to
transfer energy from the knee joint to hip and/or ankle joints, and this mechanism
would reduce the necessary muscle work and improve the mechanical economy of
a human-like walking robot.

61. Load-Bearing Exoskeleton for Augmentation of Human Running
Hugh Herr, Grant Elliott and Andrew Marecki

Augmentation of human locomotion has proved an elusive goal. Natural human
walking is extremely efficient and the complex articulation of the human leg poses
significant engineering difficulties. We present a wearable exoskeleton designed to
reduce the metabolic cost of jogging. The exoskeleton places a stiff fiberglass
spring in parallel with the complete leg during stance phase, then removes it so that
the knee may bend during leg swing. The result is a bouncing gait with reduced
reliance on the musculature of the knee and ankle.

62. Powered Ankle-Foot Prosthesis
Hugh Herr
The human ankle provides a significant amount of net positive work during the
stance period of walking, especially at moderate to fast walking speeds.
Conversely, conventional ankle-foot prostheses are completely passive during
stance, and consequently, cannot provide net positive work. Clinical studies indicate
that transtibial amputees using conventional prostheses experience many problems
during locomotion, including a high gait metabolism, a low gait speed, and gait
asymmetry. Researchers believe the main cause of these locomotion problems is
the inability of conventional prostheses to provide net positive work during
stance. The objective of this project is to develop a powered ankle-foot prosthesis
that is capable of providing net positive work during the stance period of walking. To
this end, we are investigating the mechanical design and control system
architectures for the prosthesis. We also conduct a clinical evaluation of the
proposed prosthesis on different amputee participants.

Alumni Contributor: Samuel Au

63. Sensor-Fusions for an EMG Controlled Robotic Prosthesis
Matthew Todd Farrell and Hugh Herr

Current unmotorized prostheses do not provide adequate energy return during late
stance to improve level-ground locomotion. Robotic prostheses can provide power
during late-stance to improve metabolic economy in an amputee during
level-ground walking. This project seeks to expand the types of terrain a robotic
ankle can successfully navigate by using command signals taken from the intact
and residual limbs of an amputee. By combining these command signals with
sensors attached to the robotic ankle it might be possible to further understand the
role of physiological signals in the terrain adaptation of robotic ankles.

64. Variable Impedance Prosthetic (VIPr) Socket Design
Hugh Herr and David Sengeh

NEW LISTING

Today, 100 percent of amputees experience some form of prosthetic socket
discomfort. This project involves the design and production of a comfortable,
variable impedance prosthetic (VIPr) socket using digital anatomical data for a
transtibial amputee, using computer-aided design and manufacturing (CAD/CAM).
The VIPr socket uses multiple materials to achieve compliance, thereby increasing
socket comfort for amputees, while maintaining structural integrity. The compliant
features are seamlessly integrated into the 3D printed socket to achieve lower
interface peak pressures over bony protuberances and other anatomical points in
comparison to a conventional socket. This lower peak pressure is achieved through
a design that uses anthropomorphic data acquired through surface scan and
Magnetic Resonance Imaging techniques. A mathematical transformation maps the
quantitative measurements of the human residual limb to the corresponding socket
shape and impedance characteristics, spatially.

Cesar Hidalgo—Macro Connections


How to transform data into knowledge.

65. Cultural Exports Shahar Ronen, Amy (Zhao) Yu and César A. Hidalgo

NEW LISTING

Cultural Exports introduces a new approach for studying both connections between
countries and the cultural impact of countries. Consider a native of a certain country
who becomes famous in other countries–this person is in a sense a "cultural export"
of his home country "imported" to other countries. For example, the popularity of
Dominican baseball player Manny Ramirez in the USA and Korea makes him a
cultural export of the Dominican Republic. Using Wikipedia biographies and
search-engine data, we measure the popularity of people across different countries
and languages, and break it down by each person's native country, period, and
occupation. This allows us to map international cultural trade and identify major
exporters and importers in different fields and times, as well as hubs for cultural
trade (e.g., Greece for philosophy in classical times or USA for baseball nowadays).

66. Immersion César Hidalgo, Deepak Jagdish and Daniel Smilkov

NEW LISTING

Immersion is a visual data experiment that delivers a fresh perspective of your email
inbox. Focusing on a people-centric approach rather than the content of the emails,
Immersion brings into view an important personal insight–the network of people you
are connected to via email, and how it evolves over the course of many years.
Given that this experiment deals with data that is extremely private, it is worthwhile
to note that when given secure access to your Gmail inbox (which you can revoke
anytime), Immersion only uses data from email headers and not a single word of
any email's subject or body content.
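The header-only approach can be sketched in a few lines of Python. This is an assumption-laden toy that reads a local mbox file ("mail.mbox" is a made-up path) rather than a Gmail account, but it shows how a network of correspondents can be built from From/To/Cc fields alone, without touching subjects or bodies.

    # Sketch only: count how often pairs of addresses appear on the same email.
    import mailbox
    from collections import Counter
    from email.utils import getaddresses
    from itertools import combinations

    edge_weights = Counter()
    for msg in mailbox.mbox("mail.mbox"):          # assumed local mail archive
        people = {addr.lower() for _, addr in getaddresses(
            msg.get_all("From", []) + msg.get_all("To", []) + msg.get_all("Cc", []))
            if addr}
        for a, b in combinations(sorted(people), 2):
            edge_weights[(a, b)] += 1              # one more email linking a and b

    for (a, b), n in edge_weights.most_common(10):
        print(f"{a} <-> {b}: {n} emails")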

67. Place Pulse Phil Salesses, Anthony DeVincenzi and César A. Hidalgo

Place Pulse is a website that allows anybody to quickly run a crowdsourced study
and interactively visualize the results. It works by taking a complex question, such
as “Which place in Boston looks the safest?” and breaking it down into easier to
answer binary pairs. Internet participants are given two images and asked "Which
place looks safer?" From the responses, directed graphs are generated and can be
mined, allowing the experimenter to identify interesting patterns in the data and form
new hypotheses based on their observations. It works with any city or question and
is highly scalable. With an increased understanding of human perception, it should
be possible for calculated policy decisions to have a disproportionate impact on
public opinion.
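One simple way to turn the binary answers into per-image scores is a plain win rate, sketched below; the project's own analysis is richer, and the votes here are toy data, so treat the scoring rule as an assumption.

    # Sketch only: score each image by the fraction of comparisons it won.
    from collections import defaultdict

    votes = [("img_a", "img_b", "img_a"),   # (left image, right image, winner)
             ("img_a", "img_c", "img_c"),
             ("img_b", "img_c", "img_c"),
             ("img_a", "img_b", "img_a")]

    wins, appearances = defaultdict(int), defaultdict(int)
    for left, right, winner in votes:
        appearances[left] += 1
        appearances[right] += 1
        wins[winner] += 1

    scores = {img: wins[img] / appearances[img] for img in appearances}
    for img, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{img}: perceived-safety score {score:.2f}")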

68. The Economic Complexity Observatory
Alex Simoes, Dany Bahar, Ricardo Hausmann and César A. Hidalgo

With more than six billion people and 15 billion products, the world economy is
anything but simple. The Economic Complexity Observatory is an online tool that
helps people explore this complexity by providing tools that can allow decision
makers to understand the connections that exist between countries and the myriad
of products they produce and/or export. The Economic Complexity Observatory
puts at everyone’s fingertips the latest analytical tools developed to visualize and
quantify the productive structure of countries and their evolution.

69. The Language Group Network
Shahar Ronen, Kevin Hu, Michael Xu, and César A. Hidalgo

NEW LISTING

Most interactions between cultures require overcoming a language barrier, which is
why multilingual speakers play an important role in facilitating such interactions. In
addition, certain languages–not necessarily the most spoken ones–are more likely
than others to serve as intermediary languages. We present the Language Group
Network, a new approach for studying global networks using data generated by tens
of millions of speakers from all over the world: a billion tweets, Wikipedia edits in all
languages, and translations of two million printed books. Our network spans over
eighty languages, and can be used to identify the most connected languages and
the potential paths through which information diffuses from one culture to another.
Applications include promotion of cultural interactions, prediction of trends, and
marketing.

Henry Holtzman—Information Ecology
How to create seamless and pervasive connections between our physical environments
and information resources.

70. 8D Display Henry Holtzman, Matt Hirsch and Shahram Izadi

NEW LISTING

The 8D Display combines a glasses-free 3D display (4D light field output) with a
relightable display (4D light field input). The ultimate effect of this extension to our
earlier BiDi Screen project will be a display capable of showing physically realistic
objects that respond to scene lighting as we would expect. Imagine a shiny virtual
teapot in which you see your own reflection, a 3D model that can be lighted with a
real flashlight to expose small surface features, or a virtual flashlight that illuminates
real objects in front of the display. As the 8D Display captures light field input,
gestural interaction as seen in the BiDi Screen project is also possible.

71. Air Mobs Andy Lippman, Henry Holtzman and Eyal Toledano

NEW LISTING

Air Mobs is a community-based P2P cross-operator WiFi tethering market. It
provides network connectivity when one device has no available Internet connection
or roaming costs are too high, and another device has excellent network
connectivity and a full battery. Air Mobs barters air time between different mobile
phone users using WiFi tethering to locate and establish an Internet link though
another device that has a good 3G connection. The member that provides the link
will gain airtime credit that can be used when he is not connected. Air Mobs creates
incentive via a secondary market–a user will be willing to share his data connection
since he will get data in return. The synergetic value emerges when different users
on different mobile operators provide network access to each other, compensating
for each operator's out-of-coverage areas.

72. Brin.gy: What Brings Us Together
Henry Holtzman, Andy Lippman and Polychronis Ypodimatopoulos

NEW LISTING

We allow people to form dynamic groups focused on topics that emerge
serendipitously during everyday life. They can be long-lived or flower for a short
time. Examples include people interested in buying the same product, those with
similar expertise, those in the same location, or any collection of such attributes. We
call this the Human Discovery Protocol (HDP). Similar to how computers follow
well-established protocols like DNS in order to find other computers that carry
desired information, HDP presents an open protocol for people to announce bits of
information about themselves, and have them aggregated and returned back in the
form of a group of people that match against the user’s specified criteria. We
experiment with a web-based implementation (brin.gy) that allows users to join and
communicate with groups of people based on their location, profile information, and
items they may want to buy or sell.

73. CoCam Henry Holtzman, Andy Lippman, Dan Sawada and Eyal Toledano

NEW LISTING

Collaboration and media creation are difficult tasks, both for people and for network
architectures. CoCam is a self-organizing network for real-time camera image
collaboration. Like all camera apps, just point and shoot; CoCam then automatically
joins other media creators into a network of collaborators. Network discovery,
creation, grouping, joining, and leaving is done automatically in the background,
letting users focus on participation in an event. We use local P2P middleware and a
3G negotiation service to create these networks for real-time media sharing.
CoCam also provides multiple views that make the media experience more
exciting–such as appearing to be in multiple places at the same time. The media is
immediately distributed and replicated across multiple peers; thus, if a camera phone is
confiscated, other users still have copies of the images.

74. ContextController Robert Hemsley, Arlene Ducao, Eyal Toledano and Henry Holtzman

ContextController is a second screen social TV application that augments linear


NEW LISTING
broadcast content with related contextual information. By utilizing existing
closed-captioning data, ContextController gathers related explanatory video
content, displaying this in real-time synchronized to the original content.

75. CoSync Henry Holtzman, Andy Lippman and Eyal Toledano

NEW LISTING

CoSync builds the ability to create and act jointly into mobile devices. This mirrors
the way we as a society act both individually and in concert. The CoSync device ecology
combines multiple stand-alone devices and controls them opportunistically as if they
are one distributed, or diffuse, device at the user’s fingertips. CoSync includes a
programming interface that allows time synchronized coordination at a granularity
that will permit watching a movie on one device and hearing the sound from
another. The open API encourages an ever growing set of such finely coordinated
applications.

76. Droplet Robert Hemsley and Henry Holtzman

NEW LISTING

Droplet is a tangible interface which explores the movement of information between
digital and physical representations. Through light-based communication, the
project allows information to be easily extracted from its digital form behind glass
and converted into mobile, tangible representations, altering its form and our
perception of the information.

77. Flow Robert Hemsley and Henry Holtzman

NEW LISTING

Flow is an augmented interaction project that bridges the divide between our
non-digital objects and our ecosystem of connected devices. By using
computer vision, Flow enables our traditional interactions to be augmented with
digital meaning, allowing an event in one environment to flow into the next. Through
this, physical actions such as tearing a document can have a mirrored effect and
meaning in our digital environment, leading to actions such as the deletion of the
associated digital file. This project is part of an initial exploration that focuses on
creating an augmented interaction overlay for our environment enabling users to
redefine their physical actions.

78. MindRider Arlene Ducao and Henry Holtzman

NEW LISTING

MindRider is a helmet that translates electroencephalogram (EEG) feedback into an
embedded LED display. For the wearer, green lights indicate a focused, active
mental state, while red lights indicate drowsiness, anxiety, and other states not
conducive to operating a bike or vehicle. Flashing red lights indicate extreme
anxiety (panic). As many people return to cycling as a primary means of
transportation, MindRider can support safety by adding visibility and increased
awareness to the cyclist/motorist interaction process. In future versions, MindRider
may be outfitted with an expanded set of EEG contacts, proximity sensors,
non-helmet wearable visualization, and other features to increase the cyclist's
awareness of self and environment. These features may also allow for hands-free
control of cycle function. A networked set of MindRiders may be useful for tracking,
trauma, and disaster situations.

79. MobileP2P Yosuke Bando, Eyal Toledano, Robert Hemsley, Mary Linnell, Dan Sawada
and Henry Holtzman
NEW LISTING
MobileP2P aims to magically populate mobile devices with popular video clips and
app updates without using people's data plans by opportunistically connecting
nearby devices together when they are in range of each other.

80. NewsJack Sasha Costanza-Chock, Henry Holtzman, Ethan Zuckerman and Daniel E.
Schultz
NEW LISTING
NewsJack is a media remixing tool built from Mozilla's Hackasaurus. It allows users
to modify the front pages of news sites, changing language and headlines to
change the news into what they wish it could be.

81. NeXtream: Social Henry Holtzman, ReeD Martin and Mike Shafran
Television
Functionally, television content delivery has remained largely unchanged since the
introduction of television networks. NeXtream explores an experience where the
role of the corporate network is replaced by a social network. User interests,
communities, and peers are leveraged to determine television content, combining
sequences of short videos to create a set of channels customized to each user. This
project creates an interface to explore television socially, connecting a user with a
community through content, with varying levels of interactivity: from passively
consuming a series, to actively crafting one's own television and social experience.

Alumni Contributor: Ana Luisa Santos

82. OpenIR Aziz Alghunaim, Ilias Koen, Henry Holtzman, Arlene Brigoli Ducao, Juhee Bae
and Stephanie New
NEW LISTING
When an environmental crisis strikes, the most important element to saving lives is
information. Information regarding water depths, spread of oil, fault lines, burn
scars, and elevation is crucial in the face of disaster. Much of this information is
publicly available as infrared satellite data. However, with today’s technology, this
data is difficult to obtain, and even more difficult to interpret. Open Infrared, or
OpenIR, is an ICT (information communication technology) offering geo-located
infrared satellite data as on-demand map layers and translating the data so that
anyone can understand it easily. OpenIR will be pilot tested in Indonesia, where
ecological and economic vulnerability is apparent from frequent seismic activity and
limited supporting infrastructure. The OpenIR team will explore how increased
accessibility to environmental information can help infrastructure-challenged regions
to deal with environmental crises of many kinds.

83. Proverbial Wallets Henry Holtzman, John Kestner, Daniel Leithinger, Danny Bankman, Emily Tow
and Jaekyung Jung

We have trouble controlling our consumer impulses, and there's a gap between our
decisions and the consequences. When we pull a product off the shelf, do we know
our bank-account balance, or whether we're over budget for the month? Our
existing senses are inadequate to warn us. The Proverbial Wallet fosters a financial
sense at the point of purchase by embodying our electronically tracked assets. We
provide tactile feedback reflecting account balances, spending goals, and
transactions as a visceral aid to responsible decision-making.

84. StackAR Robert Hemsley and Henry Holtzman

NEW LISTING

StackAR explores the augmentation of physical objects within a digital environment
by abstracting interfaces from physical to virtual implementations. StackAR is a
LilyPad Arduino shield that enables capacitive touch and light-based communication
with a tablet. When pressed against a screen, the functionality of StackAR extends
into the digital environment, allowing the object to become augmented by the
underlying display. This creates an augmented breadboard environment where
virtual and physical components can be combined and prototyped in a more intuitive
manner.

85. SuperShoes Dhairya Dand and Henry Holtzman

NEW LISTING

Our smartphones demand active attention while we use them to navigate streets, find
restaurants, meet friends, and remind us of tasks. SuperShoes allows us to access
this information in a physical ambient form through a foot interface. SuperShoes
takes us to our destination; senses interesting people, places, and events in our
proximity; and notifies us about tasks, all while we immerse ourselves in the
environment. We explore a physical language of interaction afforded by the foot
through various tactile senses. By weaving digital bits into the shoes, SuperShoes
liberates information from the confines of screens and onto the body.

86. Tactile Allegory Henry Holtzman and Philippa Mothersill

NEW LISTING

We have an instinctive perception of the abstract and emotional qualities of objects
through our interactions with their forms and textures. Tactile Allegory is an
exploration into the use of form and texture as a means of communicating this
additional emotive information to us through objects. The Emotional Design
Language Spectrum is a map of the variety of formal and textural designs which can
evoke different emotional experiences. Applying this tool to the design of the form
and texture of objects can add an extra layer of information to subtly inform us of
emotional experiences related to this object in a rich non-verbal medium.

87. The Glass Infrastructure
Henry Holtzman, Andy Lippman, Matthew Blackshaw, Jon Ferguson, Catherine Havasi, Julia Ma, Daniel Schultz and Polychronis Ypodimatopoulos

This project builds a social, place-based information window into the Media Lab
using 30 touch-sensitive screens strategically placed throughout the physical
complex and at sponsor sites. The idea is to get people to talk among themselves
about the work that they jointly explore in a public place. We present Lab projects
as dynamically connected sets of "charms" that visitors can save, trade, and
explore. The GI demonstrates a framework for an open, integrated IT system and
shows new uses for it.

Alumni Contributors: Rick Borovoy, Greg Elliott and Boris Grigory Kizelshteyn

88. Truth Goggles Henry Holtzman and Daniel E. Schultz

NEW LISTING

Truth Goggles attempts to decrease the polarizing effect of perceived media bias by
prompting people to question all sources equally, invoking fact-checking services at
the point of media consumption. Readers will approach even their most trusted
sources with a more critical mentality by viewing content through various "lenses" of
truth.

89. Twitter Weather Henry Holtzman, John Kestner and Stephanie Bian

The vast amounts of user-generated content on the Web produce information
overload as frequently as they provide enlightenment. Twitter Weather reduces
large quantities of text into meaningful data by gauging its emotional content. This
Website visualizes the prevailing mood about top Twitter topics by rendering a
weather-report-style display. Comment Weather is its counterpart for article
comments, allowing you to gauge sentiment without leaving the page. Supporting
Twitter Weather is a user-trained Web service that aggregates and visualizes
attitudes on a topic.
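A toy sketch of the mood-gauging step, assuming a tiny hand-made word list rather than the project's user-trained service: score each message, average the scores, and map the average onto a weather-style label. The lexicon, thresholds, and messages are all assumptions.

    # Sketch only: naive sentiment scoring mapped to a weather-style label.
    lexicon = {"love": 1, "great": 1, "happy": 1, "hate": -1, "awful": -1, "sad": -1}

    def mood(messages):
        """Average a crude per-message sentiment score and label it."""
        scores = [sum(lexicon.get(w, 0) for w in m.lower().split()) for m in messages]
        avg = sum(scores) / len(scores)
        if avg > 0.5:
            return "sunny"
        if avg < -0.5:
            return "stormy"
        return "overcast"

    print(mood(["I love this topic", "great news today", "a bit sad though"]))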

90. Where The Hel Arlene Ducao and Henry Holtzman

"Where The Hel" is a pair of helmets: plain and funky. The funky helmet is 3D
NEW LISTING
printed; the plain helmet visualizes proximity to the funky helmet as a function of
signal strength, via an LED light strip. The funky helmet contains an Xbee and a
GPS Radio. Its position is tracked via a web app. The wearer of the plain helmet
can track the funky one via the web app and the LED strip on his helmet. These
helmets are potential iterations towards a more developed HADR (Humanitarian
Assistance and Disaster Relief) helmet system.

Hiroshi Ishii—Tangible Media


How to design seamless interfaces between humans, digital information, and the
physical environment.

91. Ambient Furniture Hiroshi Ishii, David Rose, and Shaun Salzberg

NEW LISTING

Furniture is the infrastructure for human activity. Every day we open cabinets and
drawers, pull up to desks, recline in recliners, and fall into bed. How can technology
augment these everyday rituals in elegant and useful ways? The Ambient Furniture
project mixes apps with the IKEA catalog to make couches more relaxing, tables
more conversational, desks more productive, lamps more enlightening, and beds
more restful. With input from Vitra and Steelcase, we are prototyping a line of
furniture to explore ideas about peripheral awareness (Google Latitude door bell),
incidental gestures (Amazon restocking trash can and the Pandora lounge chair),
pre-attentive processing (energy clock), and eavesdropping interfaces (FaceBook
photo coffee table).

92. Beyond: A Collapsible Input Device for 3D Direct Manipulation
Jinha Lee and Hiroshi Ishii

Beyond is a collapsible input device for direct 3D manipulation. When pressed
against a screen, Beyond collapses in the physical world and extends into the digital
space of the screen, so that users have an illusion that they are inserting the tool
into the virtual space. Beyond allows users to interact directly with 3D media without
having to wear special glasses, avoiding inconsistencies of input and output. Users
can select, draw, and sculpt in 3D virtual space, and seamlessly transition between
2D and 3D manipulation.

93. FocalSpace Hiroshi Ishii, Anthony DeVincenzi and Lining Yao

NEW LISTING

FocalSpace is a system for focused collaboration utilizing spatial depth and
directional audio. We present a space where participants, tools, and other physical
objects within the space are treated as interactive objects that can be detected,
selected, and augmented with metadata. Further, we demonstrate several
scenarios of interaction as concrete examples. By utilizing diminishing reality to
remove unwanted background surroundings through synthetic blur, the system aims
to attract participant attention to foreground activity.

94. GeoSense Hiroshi Ishii, Anthony DeVincenzi and Samuel Luescher

NEW LISTING

An open publishing platform for visualization, social sharing, and data analysis of
geospatial data.

95. IdeaGarden Hiroshi Ishii, David Lakatos, and Lining Yao

The IdeaGarden allows participants of creative activities to collectively capture,
select, and share (CCSS) the stories, sketches, and ideas they produce in physical
and digital spaces. The iGarden attempts to optimize the CCSS loop and to bring it
from hours to seconds in order to turn asynchronous collaborative thought
processes into synchronous real-time cognitive flows. The iGarden system is
composed of a tangible capturing system including recording devices always
"at-hand", of a selection workflow that allows the group to reflect and reduce the
complexity of captured data in real-time and of a sharing module that connects
socially selected information to the cloud.

Alumni Contributor: Jean-Baptiste Labrune

96. Jamming User Interfaces
Hiroshi Ishii, Sean Follmer, Daniel Leithinger, Alex Olwal and Nadia Cheng

NEW LISTING

Malleable user interfaces have the potential to enable radically new forms of
interactions and expressiveness through flexible, free-form, and computationally
controlled shapes and displays. This work specifically focuses on particle jamming
as a simple, effective method for flexible, shape-changing user interfaces where
programmatic control of material stiffness enables haptic feedback, deformation,
tunable affordances and control gain. We introduce a compact, low-power
pneumatic jamming system suitable for mobile devices, and a new hydraulic-based
technique with fast, silent actuation and optical shape sensing. We enable jamming
structures to sense input and function as interaction devices through two
contributed methods for high-resolution shape sensing using: 1) index-matched
particles and fluids, and 2) capacitive and electric field sensing. We explore the
design space of malleable and organic user interfaces enabled by jamming through
four motivational prototypes that highlight jamming’s potential in HCI, including
applications for tabletops, tablets and for portable shape-changing mobile devices.

97. Kinected Conference Anthony DeVincenzi, Lining Yao, Hiroshi Ishii and Ramesh Raskar

How can we enhance the video-conferencing experience by utilizing an
interactive display? With a Kinect camera and sound sensors, we explore how
expanding a system's understanding of spatially calibrated depth and audio
alongside a live video stream can generate semantically rich three-dimensional
pixels, containing information regarding their material properties and location. Four
features have been implemented: Talking to Focus, Freezing Former Frames,
Privacy Zone, and Spatial Augmenting Reality.

98. MirrorFugue II Xiao Xiao and Hiroshi Ishii

MirrorFugue is an interface for the piano that bridges the gap of location in music
playing by connecting pianists in a virtual shared space reflected on the piano. Built
on a previous design that only showed the hands, our new prototype displays both
the hands and upper body of the pianist. MirrorFugue may be used for watching a
remote or recorded performance, taking a remote lesson, and remote duet playing.

99. Peddl Andy Lippman, Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and
David Lakatos
NEW LISTING
Peddl creates a localized, perfect market. All offers are broadcasts, allowing users
to spot trends, bargains, and opportunities. With GPS- and Internet-enabled mobile
devices in almost every pocket, we see an opportunity for a new type of
marketplace which takes into account your physical location, availability, and open
negotiation. Like other real-time activities, we are exploring transactions as an
organizing principle among people that, like Barter, may be strong, rich, and
long-lived.

100. PingPongPlusPlus Hiroshi Ishii, Xiao Xiao, Michael Bernstein, Lining Yao, Dávid Lakatos, Kojo
Acquah, Jeff Chan, Sean Follmer and Daniel Leithinger

PingPong++ (PingPongPlusPlus) builds on PingPongPlus (1998), a ping pong table
that could sense ball hits, and reuse that data to control visualizations projected on
the table. We have redesigned the system using open-source hardware and
software platforms so that anyone in the world can build their own reactive table.
We are exploring ways that people can customize their ping pong game experience.
This kiosk allows players to create their own visualizations based on a set of
templates. For more control of custom visualizations, we have released a software
API based on the popular Processing language to enable users to write their own
visualizations. We are always looking for collaborators! Visit pppp.media.mit.edu to
learn more.

101. Radical Atoms Hiroshi Ishii, Leonardo Bonanni, Keywon Chung, Sean Follmer, Jinha Lee,
Daniel Leithinger and Xiao Xiao

Radical Atoms is our vision of interactions with future material.

Alumni Contributors: Keywon Chung, Adam Kumpf, Amanda Parkes, Hayes Raffle
and Jamie B Zigelbaum

102. Recompose Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and David Lakatos

Human beings have long shaped the physical environment to reflect designs of form
and function. As an instrument of control, the human hand remains the most
fundamental interface for affecting the material world. In the wake of the digital
revolution, this is changing, bringing us to reexamine tangible interfaces. What if we
could now dynamically reshape, redesign, and restructure our environment using
the functional nature of digital tools? To address this, we present Recompose, a
framework allowing direct and gestural manipulation of our physical environment.
Recompose complements the highly precise, yet concentrated affordance of direct
manipulation with a set of gestures, allowing functional manipulation of an actuated
surface.

103. Relief Hiroshi Ishii and Daniel Leithinger

Relief is an actuated tabletop display, able to render and animate 3D shapes with a
malleable surface. It allows users to experience and form digital models such as
geographical terrain in an intuitive manner. The tabletop surface is actuated by an
array of motorized pins, which can be addressed individually and sense user input
like pulling and pushing. Our current research focuses on utilizing freehand
gestures for interacting with content on Relief.

Alumni Contributor: Adam Kumpf

104. RopeRevolution Jason Spingarn-Koff (MIT), Hiroshi Ishii, Sayamindu Dasgupta, Lining Yao,
Nadia Cheng (MIT Mechanical Engineering) and Ostap Rudakevych (Harvard
University Graduate School of Design)

Rope Revolution is a rope-based gaming system for collaborative play. After


identifying popular rope games and activities from around the world, we developed
a generalized tangible rope interface that includes a compact motion-sensing and
force-feedback module that can be used for a variety of rope-based games, such as
rope jumping, kite flying, and horseback riding. Rope Revolution is designed to
foster both co-located and remote collaborative experiences by using actual rope to
connect players in physical activities across virtual spaces.

105. SandScape Carlo Ratti, Assaf Biderman and Hiroshi Ishii

SandScape is a tangible interface for designing and understanding landscapes
through a variety of computational simulations using sand. The simulations are
projected on the surface of a sand model representing the terrain; users can choose
from a variety of different simulations highlighting height, slope, contours, shadows,
drainage, or aspect of the landscape model, and alter its form by manipulating sand
while seeing the resulting effects of computational analysis generated and projected
on the surface of sand in real time. SandScape demonstrates an alternative form of
computer interface (tangible user interface) that takes advantage of our natural
abilities to understand and manipulate physical forms while still harnessing the
power of computational simulation to help in our understanding of a model
representation.

Alumni Contributors: Yao Wang, Jason Alonso and Ben Piper

106. Sensetable Hiroshi Ishii

Sensetable is a system that wirelessly, quickly, and accurately tracks the positions
of multiple objects on a flat display surface. The tracked objects have a digital state,
which can be controlled by physically modifying them using dials or tokens. We
have developed several new interaction techniques and applications on top of this
platform. Our current work focuses on business supply-chain visualization using
system-dynamics simulation.

Alumni Contributors: Jason Alonso, Dan Chak, Gian Antonio Pangaro, James
Patten and Matt Reynolds

107. Sourcemap Hiroshi Ishii and Leonardo Amerigo Bonanni

Sourcemap.com is the open directory of supply chains and environmental footprints.


Consumers use the site to learn about where products come from, what they’re
made of, and how they impact people and the environment. Companies use
Sourcemap to communicate transparently with consumers and tell the story of how
products are made. Thousands of maps have already been created for food,
furniture, clothing, electronics, and more. Behind the website is a revolutionary
social network for supply-chain reporting. The real-time platform gathers information
from every stakeholder so that–one day soon–you’ll be able to scan a product on a
store shelf and know exactly who made it.

108. T(ether) Hiroshi Ishii, Andy Lippman, Matthew Blackshaw and David Lakatos

T(ether) is a novel spatially aware display that supports intuitive interaction with
volumetric data. The display acts as a window affording users a perspective view of
three-dimensional data through tracking of head position and orientation. T(ether)
creates a 1:1 mapping between real and virtual coordinate space allowing
immersive exploration of the joint domain. Our system creates a shared workspace
in which co-located or remote users can collaborate in both the real and virtual
worlds. The system allows input through capacitive touch on the display and a
motion-tracked glove. When placed behind the display, the user’s hand extends into
the virtual world, enabling the user to interact with objects directly.

109. Tangible Bits Hiroshi Ishii, Sean Follmer, Jinha Lee, Daniel Leithinger and Xiao Xiao

People have developed sophisticated skills for sensing and manipulating our
physical environments, but traditional GUIs (Graphical User Interfaces) do not
employ most of them. Tangible Bits builds upon these skills by giving physical form
to digital information, seamlessly coupling the worlds of bits and atoms. We are
designing "tangible user interfaces" that employ physical objects, surfaces, and
spaces as tangible embodiments of digital information. These include foreground
interactions with graspable objects and augmented surfaces, exploiting the human
senses of touch and kinesthesia. We also explore background information displays
that use "ambient media"—light, sound, airflow, and water movement—to
communicate digitally mediated senses of activity and presence at the periphery of
human awareness. We aim to change the "painted bits" of GUIs to "tangible bits,"
taking advantage of the richness of multimodal human senses and skills developed
through our lifetimes of interaction with the physical world.

Alumni Contributors: Yao Wang, Mike Ananny, Scott Brave, Dan Chak, Angela
Chang, Seung-Ho Choo, Keywon Chung, Andrew Dahley, Philipp Frei, Matthew G.
Gorbet, Adam Kumpf, Jean-Baptiste Labrune, Vincent Leclerc, Jae-Chol Lee, Ali
Mazalek, Gian Antonio Pangaro, Amanda Parkes, Ben Piper, Hayes Raffle, Sandia
Ren, Kimiko Ryokai, Victor Su, Brygg Ullmer, Catherine Vaucelle, Craig Wisneski,
Paul Yarin and Jamie B Zigelbaum

110. Topobo Hayes Raffle, Amanda Parkes and Hiroshi Ishii

Topobo is a 3-D constructive assembly system embedded with kinetic memory—the
ability to record and play back physical motion. Unique among modeling systems is
Topobo’s coincident physical input and output behaviors. By snapping together a
combination of passive (static) and active (motorized) components, users can
quickly assemble dynamic, biomorphic forms such as animals and skeletons,
animate those forms by pushing, pulling, and twisting them, and observe the system
repeatedly playing back those motions. For example, a dog can be constructed and
then taught to gesture and walk by twisting its body and legs. The dog will then
repeat those movements.

111. Video Play Sean Follmer, Hayes Raffle and Hiroshi Ishii

Long-distance families are increasingly staying connected with free video
conferencing tools. However, the tools themselves are not designed to
accommodate children's or families' needs. We explore how play can be a means
for communication at a distance. Our Video Play prototypes are simple
video-conferencing applications built with play in mind, creating opportunities for
silliness and open-ended play between adults and young children. They include
simple games, such as Find It, but also shared activities like book reading, where
users' videos are displayed as characters in a story book.

Alumni Contributor: Hayes Raffle

Joseph M. Jacobson—Molecular Machines


How to engineer at the limits of complexity with molecular-scale parts.

112. GeneFab Bram Sterling, Kelly Chang, Joseph M. Jacobson, Peter Carr, Brian Chow,
David Sun Kong, Michael Oh and Sam Hwang

What would you like to "build with biology"? The goal of the GeneFab projects is to
develop technology for the rapid fabrication of large DNA molecules, with
composition specified directly by the user. Our intent is to facilitate the field of
Synthetic Biology as it moves from a focus on single genes to designing complete
biochemical pathways, genetic networks, and more complex systems. Sub-projects
include: DNA error correction, microfluidics for high throughput gene synthesis, and
genome-scale engineering (rE. coli).

Alumni Contributor: Chris Emig



113. NanoFab Kimin Jun, Jaebum Joo and Joseph M. Jacobson

We are developing techniques to use the focused ion beam to program the
fabrication of nanowire-based nanostructures and logic devices.

114. Synthetic Photosynthesis Joseph M. Jacobson and Kimin Jun
NEW LISTING
We are using nanowires to build structures for synthetic photosynthesis for the solar
generation of liquid fuels.

Sepandar Kamvar—Social Computing


How to meaningfully connect people with information.

115. The Dog Programming Language Salman Ahmad, Zahan Malkani and Sepandar Kamvar
NEW LISTING
Dog is a new programming language that makes it easy and intuitive to create
social applications. Dog focuses on a unique and small set of features that allows it
to achieve the power of a full-blown application development framework. One of
Dog’s key features is built-in support for interacting with people. Dog provides a
natural framework in which both people and computers can be given instructions
and return results. It can perform a long-running computation while also displaying
messages, requesting information, or even sending operations to particular
individuals or groups. By switching between machine and human computation,
developers can create powerful workflows and model complex social processes
without worrying about low-level technical details.
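
Dog's own syntax is not reproduced here; the plain-Python sketch below only illustrates the underlying idea of treating people and machines as interchangeable computation steps. The ask_person helper is hypothetical and stands in for routing a request to a person and waiting for a reply.

def ask_person(person, question):
    """Stand-in for dispatching a task to a human and waiting for the reply."""
    return input(f"[to {person}] {question} ")

def summarize(votes):
    """Machine step: tally free-text yes/no answers."""
    yes = sum(1 for v in votes if v.strip().lower().startswith("y"))
    return f"{yes} of {len(votes)} people said yes"

def plan_event(people):
    # Human computation: gather answers from each participant.
    votes = [ask_person(p, "Can you make Friday's demo?") for p in people]
    # Machine computation: aggregate and report.
    return summarize(votes)

if __name__ == "__main__":
    print(plan_event(["alice", "bob"]))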

Kent Larson—Changing Places


How new strategies for architectural design, mobility systems, and networked
intelligence can make possible dynamic, evolving places that respond to the
complexities of life.

116. A Market Economy of Trips Dimitris Papanikolaou and Kent Larson

We are developing a new strategy to create autonomous self-organizing vehicle
sharing systems that uses incentive mechanisms (dynamic pricing) to smooth
demand imbalances, and an interactive graphical user interface to effectively
communicate location-based price information. Prices adjust dynamically to parking
needs, incentivizing users to drive vehicles to stations with too few vehicles, while
discouraging arrivals to stations with excess vehicles. This research explains how
users make decisions in dynamically priced mobility systems, under which
circumstances their actions may add up to a self-regulating economy, and how that
economy performs under different demand patterns. To address these questions,
we are developing a computational framework that combines system dynamics,
urban economics, and game theory to model system behavior; it will be used to
determine the optimal pricing policy, fleet size, and density of parking stations for a
stable yet profitable system.

Alumni Contributor: William J. Mitchell
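
A minimal sketch of the dynamic-pricing idea described above (illustrative parameters, not the project's calibrated model): each station's drop-off price moves with its inventory relative to a target level, so returning a vehicle to an understocked station is rewarded and returning one to an overstocked station is penalized.

BASE_PRICE = 4.00      # nominal drop-off fee in dollars (assumed)
SENSITIVITY = 0.50     # dollars added or removed per vehicle of imbalance (assumed)

def dropoff_price(vehicles_at_station, target_level):
    imbalance = vehicles_at_station - target_level
    # Overstocked station -> higher price; understocked -> discount or even a credit.
    return round(BASE_PRICE + SENSITIVITY * imbalance, 2)

if __name__ == "__main__":
    for inventory in (0, 5, 12):
        print(f"{inventory:2d} vehicles on hand -> drop-off price ${dropoff_price(inventory, 8):.2f}")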



117. AEVITA Kent Larson, William Lark, Jr., Nicholas David Pennycooke and Praveen
Subramani
NEW LISTING
With various private, governmental, and academic institutions researching
autonomous vehicle deployment strategies, the way we think about vehicles must
adapt. But what happens when the driver–the main conduit of information
transaction between the vehicle and its surroundings–is removed? The living EV
system aims to fill this communication void by giving the autonomous vehicle the
means to sense others around it, and react to various stimuli as intuitively as
possible by taking design cues from the living world. The system comprises
various types of sensors (computer vision, UWB beacon tracking, sonar) and
actuators (light, sound, mechanical) to express recognition of others,
announce intentions, and portray the vehicle’s general state. All systems are built
on the second version of the half-scale CityCar concept vehicle, featuring advanced
mixed-materials (CFRP + Aluminum) and a significantly more modularized
architecture.

118. Autonomous Facades Ronan Lonergan and Kent Larson


for Zero-Energy
We are developing self-powered responsive building envelope components that
Urban Housing
efficiently integrate solar shading and heating, ventilation, privacy control, and
ambient lighting. Dynamic facade modules integrate sensing systems to respond to
both environmental conditions and the activities of people.

119. BTNz! Kent Larson, Andy Lippman, Shaun David Salzberg, Dan Sawada and
Jonathan Speiser
NEW LISTING
We are constructing a lightweight, viral interface consisting of a button and screen
strategically positioned around public spaces to foster social interactions. Users will
be able to upload messages for display on the screen when the button is pushed.
The idea is to explore if a simple, one-dimensional input device and a small output
device can be powerful enough to encourage people to share information about
their shared space and spur joint social activities. The work includes building an
application environment and collecting and analyzing data on the emergent social
activities. Later work may involve tying identity to button-pushers and providing
more context-aware messages to the users.

120. CityCar Ryan C.C. Chin, William Lark, Jr., Nicholas Pennycooke, Praveen Subramani,
and Kent Larson

CityCar is a foldable, electric, sharable, two-passenger vehicle for crowded cities.


Wheel Robots—fully modular in-wheel electric motors—integrate drive motors,
suspension, braking, and steering inside the hub-space of the wheel. This
drive-by-wire system requires only data, power, and mechanical connection to the
chassis. With over 80 degrees of steering freedom, Wheel Robots enable a
zero-turn radius, and without the gasoline-powered engine and drive-train the
CityCar can fold. We are working with Denokinn on an integrated, modular system
for assembly and distribution of the CityCar. Based in Spain's Basque region, the
project is called "Hiriko," which stands for Urban Car. The Hiriko project aims to
create a new, distributed manufacturing system for the CityCar, enabling automotive
suppliers to provide "core" components made of integrated modules such as
in-wheel motor units, battery systems, interiors, vehicle control systems, vehicle
chassis/exoskeleton, and glazing. (Continuing the vision of William J. Mitchell.)

Alumni Contributors: Patrik Kunzler, Philip Liang, William J. Mitchell and Raul-David
Poblano



121. CityCar Folding Chassis William Lark, Jr., Nicholas Pennycooke, Ryan C.C. Chin and Kent Larson

The CityCar folding chassis is a half-scale working prototype that consists of four
independently controlled in-wheel electric motors, four-bar linkage mechanism for
folding, aluminum exoskeleton, operable front ingress/egress doors,
lithium-nanophosphate battery packs, vehicle controls, and a storage compartment.
The folding chassis can demonstrate compact folding (3:1 ratio compared to
conventional vehicles), omni-directional driving, and wireless remote controls. The
half-scale mock-up explores the material character and potential manufacturing
strategies that will scale to a future full-scale build. (Continuing the vision of William
J. Mitchell.)

Alumni Contributors: William J. Mitchell and Raul-David Poblano

122. CityCar Half-Scale Prototype Kent Larson, Nicholas David Pennycooke and Praveen Subramani
NEW LISTING
The CityCar half-scale prototype has been redesigned from the ground up to
incorporate the latest materials and manufacturing processes, sensing
technologies, battery systems, and more. This new prototype demonstrates the
functional features of the CityCar at half-scale, including the folding chassis. New
sensing systems have been embedded to enable research into autonomous driving
and parking, while lithium batteries will provide extended range. A new control
system based on microprocessors allows for faster boot time and modularity of the
control system architecture.

123. CityCar Ingress-Egress Model Kent Larson, Nicholas David Pennycooke and Praveen Subramani
NEW LISTING
The CityCar Ingress-Egress Model provides a full-scale platform for testing front
ingress and egress for new vehicle types. The platform features three levels of
actuation for controlling the movement of seats within a folding vehicle, and can
store custom presets of seat positioning and folding process for different users.

124. CityCar Testing Platform William Lark, Jr., Nicholas Pennycooke, Ryan C.C. Chin and Kent Larson

The CityCar Testing Platform is a full-scale and modular vehicle that consists of four
independently controlled Wheel Robots, an extruded aluminum frame, battery pack,
driver's interface, and seating for two. Each Wheel Robot is capable of over 120
degrees of steering freedom, thus giving the CityCar chassis omni-directional
driving ability such as sideways parking, zero-radius turning, torque steering, and
variable velocity (in each wheel) steering. This four-wheeler is an experimental
platform for by-wire controls (non-mechanically coupled controls) for the Wheel
Robots, thus allowing for the platform to be controlled by wireless joysticks. The
four-wheeler also allows the CityCar design team to experiment with highly
personalized body/cabin designs. (Continuing the vision of William J. Mitchell.)

Alumni Contributor: William J. Mitchell

125. CityHealth and Indoor Environment Rich Fletcher, Jason Nawyn, and Kent Larson
NEW LISTING
The spaces in which we live and work have a strong effect on our physical and
mental health. In addition to obvious effects on physical illness and healing, the
quality of our air, the intensity of sound, and the color of our artificial lighting have
also been shown to be important factors that affect cognitive skills, stress levels,
motivation, and work productivity. As a research tool, we have developed small,
wireless, wearable sensors that enable us to simultaneously monitor our
environment and our physiology in real time. By better understanding these
environmental factors, we can design architectural spaces that automatically adapt
to the needs of specific human activities (work/concentration, social relaxation) and
automatically provide for specific health requirements (physical illness, assisted
living).



126. CityHome Kent Larson, Daniel Smithwick and Hasier Larrea

We demonstrate how the CityHome, which has a very small footprint (840 square
feet), can function as an apartment two to three times that size. This is achieved
through a transformable wall system which integrates furniture, storage, exercise
equipment, lighting, office equipment, and entertainment systems. One potential
scenario for the CityHome is one where the bedroom transforms into a home gym, the
living room into a dinner-party space for 14 people, a suite for four guests, two
separate office spaces plus a meeting space, or an open loft space for a large
party. Finally, the kitchen can either be open to the living space or closed off to be
used as a catering kitchen. Each occupant engages in a process to personalize the
precise design of the wall units according to his or her unique activities and
requirements.

127. CityHome: RoboWall Kent Larson, Hasier Larrea and Carlos Olabarri

NEW LISTING
The RoboWall is a key module of the CityHome apartment, providing flexibility to
the space by moving and transforming, serving as the technology that enables
home reconfiguration. It is a wall that not only moves but also is functional and
smart.

128. Distinguish: Home Activity Recognition Kent Larson

We propose a recognition system with a user-centric point of view, designed to
make the activity detection processes intelligible to the end-user of the home, and to
permit these users to improve recognition and customize activity models based on
their particular habits and behaviors. Our system, named Distinguish, relies on
high-level, common sense information to create activity models used in recognition.
These models are understandable by end-users and transferable between homes.
Distinguish consists of a common-sense recognition engine that can be modified by
end-users using a novel phone interface.
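
To illustrate what an end-user-legible activity model might look like (a toy sketch, not the Distinguish engine), each activity below is simply a named set of object interactions that a resident could read, edit, or copy to another home; recognition scores recent sensor events against those sets.

# Illustrative, hand-written activity models: each is just a set of objects.
ACTIVITY_MODELS = {
    "making coffee": {"kettle", "coffee tin", "mug"},
    "watching tv":   {"remote control", "sofa"},
}

def recognize(recent_object_events):
    """Score each activity by the fraction of its objects seen recently."""
    seen = set(recent_object_events)
    scores = {name: len(objs & seen) / len(objs) for name, objs in ACTIVITY_MODELS.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

if __name__ == "__main__":
    print(recognize(["mug", "kettle", "fridge door"]))   # ('making coffee', ~0.67)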

129. FlickInk Sheng-Ying (Aithne) Pao and Kent Larson

NEW LISTING
Have you ever been in a teleconference and found it difficult to deliver what you’ve
been writing/sketching on paper to the remote participant? FlickInk reinvents
paper/pen-based interaction and enables your notes to jump from paper to physical
surroundings as well as to a remote destination. With a quick flick of the pen, it
allows you to naturally “throw” your handwriting to remote collaborators whenever
you're ready. While the contents are sharable in real time as you write, you maintain
control of what's shared and what's private. Control over authorship and privacy is
preserved as this paper-based medium becomes accessible and natural in remote
collaboration. Beyond collaboration, FlickInk also seamlessly
transfers writings/sketches on paper to specified physical objects. We aim to
enhance this novel interaction to enrich highly personalized dynamic experiences
for living-working space in the future.

130. Hiriko CityCar Urban Feasibility Studies Kent Larson, Chih-Chao Chuang and Ryan C.C. Chin
NEW LISTING
We are engaging in research that may be incorporated by Denokinn into a feasibility
study for Mobility-on-Demand (MoD) systems in a select number of cities, including
Berlin, Barcelona, Malmo, and San Francisco. The goal of the project is to propose
electric mobility car-sharing pilot programs to collaborating cities that will work with
their existing public infrastructure, use the Hiriko CityCar as the primary electric
vehicle, and study how this system fits the urbanscape and lifestyle of each city.



131. Hiriko CityCar with Denokinn Ryan C.C. Chin, Kent Larson, William Lark, Jr., Chih-Chao Chuang,
Nicholas Pennycooke, and Praveen Subramani

We are working with Denokinn to design and develop an integrated modular system
for assembly and distribution of the CityCar. This project, based in the Basque
region of Spain, will be called the "Hiriko" Project, which stands for Urban Car (Hiri =
urban, Ko = coche or car in Basque). The goal of the Hiriko project is to create a
new, distributed manufacturing system for the CityCar which will enable automotive
suppliers to provide "core" components made of integrated modules such as
in-wheel motor units, battery systems, interiors, vehicle control systems, vehicle
chassis/exoskeleton, and glazing. A full-scale working prototype will be completed
by the end of 2011 with an additional 20 prototypes to be built for testing in 2012.
(Continuing the vision of William J. Mitchell).

Alumni Contributors: William J. Mitchell and Raul-David Poblano

132. Home Genome: Mass-Personalized Housing Daniel Smithwick and Kent Larson

The home is becoming a center for preventative health care, energy production,
distributed work, and new forms of learning, entertainment, and communication. We
are developing techniques for capturing and encoding concepts related to human
needs, activities, values, and practices. We are investigating solutions built from an
expanding set of building blocks, or “genes,” which can be combined and
recombined in various ways to create a unique assembly of spaces and systems.
We are developing algorithms to match individuals to design solutions in a process
analogous to that used to match customer profiles to music, movies, and books, as
well as new fabrication and supply-chain technologies for efficient production. We
are exploring how to tap the collective intelligence of distributed groups of people
and companies to create an expanding set of solutions.

133. HomeMaestro Kent Larson, Shaun David Salzberg and Microsoft Research

NEW LISTING
Current home-automation systems offer very poor user experiences. On a
superficial level, they are extremely expensive, difficult to install and use, have
limited functionality, and are often proprietary. Deeper problems include the difficulty
of scripting ever-changing human schedules, managing network security, and
understanding and debugging artificially intelligent systems, as well as dealing with
homes with multiple occupants and preferences. HomeMaestro is a
home-automation system prototype that attempts to address many of these issues.
It consists of two main features: a tangible scripting interface that lets users give
their appliances "muscle memory" by naturally interacting with them, and an "app
store" for quickly and easily downloading functionality to the home. In other words,
HomeMaestro is a platform for intuitively defining home appliance behavior.

134. Human Health Monitoring in Vehicles Rich Fletcher and Kent Larson
NEW LISTING
There is increasing interest in performing physiology monitoring in vehicles. This is
motivated by healthcare trends, aging population, accident prevention, insurance,
and forensic interests. We have developed sensors that can be embedded in a car
seat and wirelessly measure occupant heart rate parameters and respiration. By
developing algorithms that can detect driver stress, fatigue, or impairment, we can
create better automotive safety systems, controls, and smart lighting for
next-generation smart vehicles.
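
As a purely illustrative example of the kind of signal processing involved (not the project's algorithm, and not a medical tool), one common ingredient of stress and fatigue detection is heart-rate variability; the sketch below computes RMSSD from inter-beat intervals reported by a seat sensor and raises a flag when it drops below an assumed threshold.

import math

def rmssd(ibi_seconds):
    """Root mean square of successive differences between beats, in milliseconds."""
    diffs = [b - a for a, b in zip(ibi_seconds, ibi_seconds[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs)) * 1000

def stress_flag(ibi_seconds, threshold_ms=20.0):
    """Low beat-to-beat variability is one (crude, assumed) indicator of elevated stress."""
    return rmssd(ibi_seconds) < threshold_ms

if __name__ == "__main__":
    calm  = [0.85, 0.91, 0.82, 0.95, 0.88]      # made-up inter-beat intervals (seconds)
    tense = [0.70, 0.71, 0.70, 0.71, 0.70]
    print(stress_flag(calm), stress_flag(tense))   # False True with these samples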

135. Intelligent Autonomous Parking Environment Chris Post, Raul-David Poblano, Ryan C.C. Chin, and
Kent Larson

In an urban environment, space is a valuable commodity. Current parking structures
must allow each driver to independently navigate the parking structure to find a
space. As next-generation vehicles turn more and more to drive-by-wire systems,
though, direct human interaction will not be necessary for vehicle movement. An
intelligent parking environment can use drive-by-wire technology to take the burden
of parking away from the driver, allowing for more efficient allocation of parking
resources to make urban parking less expensive. With central vehicle control, cars
can block each other while parked, since the parking environment can move other
vehicles to enable a blocked vehicle to leave. The parking environment can also
monitor the vehicle charge, allowing intelligent and efficient utilization of charge
stations by moving vehicles to and from charge stations as necessary.

136. Mass-Personalized Solutions for the Elderly Kent Larson, Ryan C.C. Chin, Daniel John Smithwick
and Tyrone L. Yang
NEW LISTING
The housing, mobility, and health needs of the elderly are diverse, but current
products and services are generic, disconnected from context, difficult to access
without specialized guidance, and do not anticipate changing life circumstances. We
are creating a platform for delivering integrated, personalized solutions to help aging
individuals remain healthy, autonomous, productive, and engaged. We are
developing new ways to assess specific individual needs and create
mass-customized solutions. We are also developing new systems and standards for
construction that will enable the delivery of more responsive homes, products, and
services; these standards will make possible cost-effective but sophisticated,
interoperable building components and systems. For instance, daylighting controls
will be coordinated with reconfigurable rooms and will accommodate glare
sensitivity. These construction standards will enable industrial suppliers to easily
upgrade and retrofit homes to better care for home occupants as their needs
change over time.

137. Media Lab Energy and Charging Research Station Praveen Subramani, Raul-David Poblano,
Ryan C.C. Chin, Kent Larson and Schneider Electric

We are collaborating with Schneider Electric to develop a rapid, high-power
charging station in MIT's Stata Center for researching EV rapid charging and battery
storage systems for the electric grid. The system is built on a 500 kW commercial
uninterruptible power supply (UPS) designed by Schneider Electric and modified by
Media Lab researchers to enable rapid power transfer from lead-acid batteries in
the UPS to lithium-ion batteries onboard an electric vehicle. Research experiments
include: exploration of DC battery banks for intermediate energy storage between
the grid and vehicles; repurposing the lead acid batteries in UPS systems with
lithium-ion cells; and exploration of Level III charging connectors, wireless charging,
and user-interface design for connecting the vehicles to physical infrastructure. The
station is scheduled for completion by early 2012 and will be among the most
advanced battery and EV charging research platforms at a university.

138. MITes+: Portable Wireless Sensors for Studying Behavior in Natural Settings Kent Larson and
Stephen Intille

MITes (MIT environmental sensors) are low-cost, wireless devices for collecting
data about human behavior and the state of the environment. Nine versions of
MITes have now been developed, including MITes for people movement (3-axis
accelerometers), object movement (2-axis accelerometers), temperature, light
levels, indoor location, ultra-violet light exposure, heart rate, haptic output, and
electrical current flow. MITes are being deployed to study human behavior in natural
settings. We are also developing activity recognition algorithms using MITes data for
health and energy applications. (A House_n Research Consortium initiative funded
by the National Science Foundation.)

Alumni Contributors: Randy Rockinson and Emmanuel Munguia Tapia

139. Mobility on Demand Systems Kent Larson, Ryan C.C. Chin, Chih-Chao Chuang, William Lark, Jr.,
Brandon Phillip Martin-Anderson and SiZhi Zhou

Mobility on Demand (MoD) systems are fleets of lightweight electric vehicles at
strategically distributed electrical charging stations throughout a city. MoD systems
solve the “first and last mile” problem of public transit, providing mobility between
transit station and home/workplace. Users swipe a membership card at the MoD
station to access vehicles, which can be driven to any other station (one-way
rental). The Vélib' system of 20,000+ shared bicycles in Paris is the largest and
most popular one-way rental system in the world. MoD systems incorporate
intelligent fleet management through sensor networks, pattern recognition, and
dynamic pricing, and the benefits of Smart Grid technologies including intelligent
electrical charging (including rapid charging), vehicle-to-grid (V2G), and surplus
energy storage for renewable power generation and peak shaving for the local
utility. We have designed three MoD vehicles: CityCar, RoboScooter, and
GreenWheel bicycle. (Continuing the vision of William J. Mitchell.)

140. Open-Source Furniture Kent Larson

We are exploring the use of parametric design tools and CNC fabrication
technology to enable lay people to navigate through a complex furniture and
cabinetry design process for office and residential applications. We are also
exploring the integration of sensors, lighting, and actuators into furniture to create
objects that are responsive to human activity.

141. Operator Kent Larson and Brandon Phillip Martin-Anderson

NEW LISTING
Operator is an AI agent that keeps tabs on how things are running around town, and
tells you how to get where you want to go in the least effortful of ways.

142. Participatory Environmental Sensing for Communities Rich Fletcher and Kent Larson
NEW LISTING
Air and water pollution are well-known concerns in cities throughout the world.
However, communities often lack practical tools to measure and record pollution
levels, and thus are often powerless to motivate policy change or government
action. Although some government-funded pollution monitors do exist, they are
sparsely located, and many large national and local governments fail to disclose this
environmental data in areas where pollution is most prevalent. In order to address
this public health need, we have been developing very low-cost, ultra low-power
environmental sensors for air, soil, and water, that enable communities to easily
sample their environment and upload data to their mobile phone and an online map.
The ability to perform fine resolution, large-scale environmental monitoring not only
empowers communities to enact new policies, but also serves as a public resource
for city health services, traffic control, and general urban design.

143. PlaceLab and BoxLab Jason Nawyn, Stephen Intille and Kent Larson

The PlaceLab was a highly instrumented, apartment-scale, shared research facility
where new technologies and design concepts were tested and evaluated in the
context of everyday living. It was used by researchers until 2008 to collect
fine-grained human behavior and environmental data, and to systematically test and
evaluate strategies and technologies for the home in a natural setting with volunteer
occupants. BoxLab is a portable version with many of the data collection capabilities
of PlaceLab. BoxLab can be deployed in any home or workplace. (A House_n
Research Consortium project funded by the National Science Foundation.)

Alumni Contributors: Jennifer Suzanne Beaudin, Manu Gupta, Pallavi Kaushik,
Aydin Oztoprak, Randy Rockinson and Emmanuel Munguia Tapia

144. Powersuit: Micro-Energy Harvesting Jennifer Broutin Farah and Kent Larson
NEW LISTING
The PowerSuit is a micro-energy harvesting shirt that functions based on
temperature differentials between a person's skin and the outside environment. The
skin becomes an activated landscape that can be used for micro-power generation.
The idea is to consider small increments of energy as useful toward a specific
purpose such as lighting safety LEDs while running at night time on cold days.
Fundamentally, this is a shift in how people consider energy. Rather than constantly
striving for tools and devices that are more powerful and less energy efficient, why
not consider using small amounts of energy, not typically utilized, toward more
efficient devices such as LED lighting? This project is the beginning of an exploration
of material structures that yield micro-power through temperature differentials.
Imagine a material impregnated with this technology that could be applied to
surfaces that will consistently harness small amounts of energy.

145. Robotic Facade / Personalized Sunlight Harrison Hall, Kent Larson and Shaun David Salzberg
NEW LISTING
The robotic façade is conceived as a mass-customizable module that combines
solar control, heating, cooling, ventilation, and other functions to serve an urban
apartment. It attaches to the building “chassis” with standardized power, data, and
mechanical attachments to simplify field installation and dramatically increase
energy performance. The design makes use of an articulating mirror to direct shafts
of sunlight to precise points in the apartment interior. Tiny, low-cost, easily installed
wireless sensors and activity recognition algorithms allow occupants to use a mobile
phone interface to map activities of daily living to personalized sunlight positions.
We are also developing strategies to control LED luminaires to turn off, dim, or tune
the lighting to more energy-efficient spectra in response to the location, activities,
and paths of the occupants.

Alumni Contributor: Ronan Patrick Lonergan
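
The mirror-pointing problem has a compact geometric core. The sketch below is an illustration under simplifying assumptions (a flat mirror, known sun direction, known target point), not the facade's control code: to redirect sunlight toward a chosen spot, the mirror normal must bisect the reversed incoming sun direction and the direction toward the target.

import numpy as np

def mirror_normal(sun_dir, mirror_pos, target_pos):
    """sun_dir points from the sun toward the mirror; returns the unit mirror normal."""
    incoming = -np.asarray(sun_dir, dtype=float)
    incoming /= np.linalg.norm(incoming)
    outgoing = np.asarray(target_pos, dtype=float) - np.asarray(mirror_pos, dtype=float)
    outgoing /= np.linalg.norm(outgoing)
    normal = incoming + outgoing            # angle bisector of the two unit vectors
    return normal / np.linalg.norm(normal)

if __name__ == "__main__":
    # Sun low in the southern sky, mirror at the window, target on a desk inside.
    print(mirror_normal(sun_dir=[0.0, -0.7, -0.7], mirror_pos=[0, 0, 2], target_pos=[0, 3, 1]))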

146. SeedPod: Interactive Farming Module Jennifer Broutin Farah, Colin Carew, Rich Fletcher and
Kent Larson
NEW LISTING
SeedPod is an interactive farming system that assists everyday people in reliably
producing healthy food in urban areas. SeedPod is a scalable, modular system
augmented by technology such as monitoring sensors, networked components, and
smart mobile applications that make the process of growing aeroponic vegetables
easier to manage and understand. We believe that SeedPod
serves as a platform for closing the loop between people and food.

147. Shortest Path Tree Kent Larson and Brandon Phillip Martin-Anderson

NEW LISTING
Shortest Path Tree is an experimental way to interact with an algorithmic multimodal
trip planner. It emphasizes how the shape of the city interacts with the planning
process embedded in every mobility decision.
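
The data structure behind the visualization is the classic shortest path tree. The sketch below uses a tiny invented graph and standard Dijkstra's algorithm: for every reachable node it records the best travel time from the origin and the node it was reached from, which together define the tree.

import heapq

def shortest_path_tree(graph, origin):
    dist, parent = {origin: 0.0}, {origin: None}
    queue = [(0.0, origin)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                             # stale queue entry
        for neighbor, minutes in graph.get(node, []):
            nd = d + minutes
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor], parent[neighbor] = nd, node
                heapq.heappush(queue, (nd, neighbor))
    return dist, parent

if __name__ == "__main__":
    # Edge weights are travel minutes; walking and transit edges may coexist.
    city = {
        "home":     [("bus stop", 5), ("cafe", 12)],
        "bus stop": [("downtown", 15)],
        "cafe":     [("downtown", 20)],
    }
    print(shortest_path_tree(city, "home"))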

148. Smart Customization of Men's Dress Shirts: A Study on Environmental Impact Ryan C. C. Chin,
Daniel Smithwick and Kent Larson

Sanders Consulting’s 2005 ground-breaking research, “Why Mass Customization is
the Ultimate Lean Manufacturing System,” showed that the best standard
mass-production practices, when framed from the point of view of the entire product
lifecycle–from raw material production to point of purchase–were actually very
inefficient and indeed wasteful in terms of energy, material use, and time. Our
research examines the environmental impacts of applying mass customization
methodologies to men's custom dress shirts. The study traces the production,
distribution, sale, and customer use of the product in order to discover key areas of
waste and opportunities for improvement. Our comparative study examines not only
the energy and carbon emissions due to production and distribution, but also
customer acquisition and use, using RFID tags to track shirt utilization by over 20
subjects over a three-month period.

149. Smart DC MicroGrid Kent Larson and Christophe Yoh Charles Meyers

NEW LISTING
Given the increasing development of renewable energy, its integration into the
electric distribution grid needs to be addressed. In addition, the majority of
household appliances operate on DC. The aim of this project is to develop a
microgrid capable of addressing these issues, while drawing on a smart control
system.



150. smartCharge Praveen Subramani, Sean Cockey, Guangyan Gao, Jean Martin and Kent
Larson
NEW LISTING
With the next generation of lightweight electric vehicles being deployed in vehicle
sharing systems across the world, there is a growing need for smarter charging
infrastructure. smartCharge is the next generation of intelligent charging
infrastructure for EVs in cities. Specifically optimized for EV sharing systems, the
smartCharge platform integrates secure locking, high current vehicle charging (up to
36A), and data transfer into a single connector. Its concentric connector design
allows users to insert the plug from any angle so they can quickly lock and
charge the rented vehicle without wasting time and space on separate docking
and charging systems. The system connects vehicles to a smart charging post that
integrates ambient LED lighting to provide feedback to users on the current state of
charge of the vehicle, its availability status, and maintenance needs. The
connection system is universally designed to function with electric bicycles,
scooters, cars, and other lightweight EVs.

151. Spike: Social Cycling Kent Larson and Sandra Richter

NEW LISTING
Spike is a social cycling application developed for bike-sharing programs. The
application persuades urban dwellers to bike together, increasing the perceived
level of safety. Social deals and benefits which can only be redeemed together
motivate the behavior change. Frequent Biker Miles sustain the behavior. An
essential feature is real-time information on where the users of the social network
are currently biking or when they are planning to bike, to facilitate bike dates.

152. Wheel Robots William Lark, Jr., Nicholas Pennycooke, Ryan C.C. Chin and Kent Larson

The mechanical components that make driving a vehicle possible (acceleration,
braking, steering, springing) are located inside the space of the wheel, forming
independent wheel robots and freeing the vehicular space of these components.
Connected to the chassis are simple mechanical, power, and data connections,
allowing for the wheel robots to plug in to a vehicle simply and quickly. A CPU in the
vehicle provides the input necessary for driving according to the vehicle's
dimensions or loading condition. The design of the wheel robots provides optimal
contact patch placement, lower unsprung and rotational mass, omnidirectional
steering, great space savings, and modularity, as the wheel robots can function
appropriately on vehicles of different dimensions and weight. (Continuing the vision
of William J. Mitchell.)

Alumni Contributors: Patrik Kunzler, Philip Liang and William J. Mitchell

153. WorkLife Jarmo Suominen and Kent Larson

The nature of work is rapidly changing, but designers have a poor understanding of
how places of work affect interaction, creativity, and productivity. We are using
mobile phones that ask context-triggered questions and sensors in workplaces to
collect information about how spaces are used and how space influences feelings
such as productivity and creativity. Pilot studies took place at the Steelcase
headquarters in 2007 and in the offices of EGO, Inc. in Helsinki, Finland, in 2009. (A
House_n Research Consortium project funded by TEKES.)

Alumni Contributor: Kenneth Cheung



Henry Lieberman—Software Agents
How software can act as an assistant to the user rather than a tool, by learning from
interaction and by proactively anticipating the user's needs.

154. Common-Sense Reasoning for Interactive Applications Henry Lieberman

A long-standing dream of artificial intelligence has been to put common-sense
knowledge into computers–enabling machines to reason about everyday life. Some
projects, such as Cyc, have begun to amass large collections of such knowledge.
However, it is widely assumed that the use of common sense in interactive
applications will remain impractical for years, until these collections can be
considered sufficiently complete, and common-sense reasoning sufficiently robust.
Recently we have had some success in applying common-sense knowledge in a
number of intelligent interface agents, despite the admittedly spotty coverage and
unreliable inference of today's common-sense knowledge systems.

Alumni Contributors: Xinyu H. Liu and Push Singh

155. CommonConsensus: A Game for Collecting Commonsense Goals Henry Lieberman and Dustin Smith

We have developed Common Consensus, a fun, self-sustaining web-based game
that both collects and validates commonsense knowledge about everyday goals.
Goals are a key element of commonsense knowledge; in many of our interface
agents, we need to recognize goals from user actions (plan recognition), and
generate sequences of actions that implement goals (planning). We also often need
to answer more general questions about the situations in which goals occur, such
as when and where a particular goal might be likely, or how long it is likely to take to
achieve.

Alumni Contributor: Push Singh

156. E-Commerce When Things Go Wrong Henry Lieberman

One of the biggest challenges for the digital economy is what to do when things go
wrong. Orders get misplaced, numbers mistyped, requests misunderstood: then
what? Consumers are frustrated by long waits on hold, misplaced receipts, and
delays to problem resolution; companies are frustrated by the cost of high-quality
customer service. Online companies want customers’ trust, and how a company
handles problems directly impacts that. We explore how software agents and other
technologies can help with this issue. Borrowing ideas from software debugging, we
can have agents help to automate record-keeping and retrieval, track
dependencies, and provide visualization of processes. Diagnostic problem-solving
can generate hypotheses about causes of errors, and seek information that allows
hypotheses to be tested. Agents act on behalf of both the consumer and the vendor
to resolve problems more quickly and at lower cost.

157. Goal-Oriented Interfaces for Consumer Electronics Henry Lieberman and Pei-Yu Chi

Consumer electronics devices are becoming more complicated, intimidating users.
These devices do not know anything about everyday life or human goals, and they
show irrelevant menus and options. Using common-sense reasoning, we are
building a system, Roadie, with knowledge about the user's intentions; this
knowledge will help the device to display relevant information to reach the user's
goal. For example, an amplifier should suggest a play option when a new
instrument is connected, or a DVD player suggest a sound configuration based on
the movie it is playing. This will lead to more human-like interactions with these
devices. We have constructed a Roadie interface to real consumer electronics
devices: a television, set top box, and smart phone. The devices communicate over
Wi-Fi, and use the UPnP protocols.

Alumni Contributor: Jose H. Espinosa

158. Goal-Oriented Interfaces for Mobile Phones Henry Lieberman, Karthik Dinakar, Christopher Fry,
Dustin Arthur Smith, Hal Abelson and Venky Raju
NEW LISTING
Contemporary mobile phones provide a vast array of capabilities in so-called
"apps," but currently each app lives in its own little world, with its own interface.
Apps are usually unable to communicate with each other and unable to cooperate
to meet users' needs. This project intends to enable end-users to "program" their
phones using natural language and speech recognition to perform complex tasks. A
user, for example, could say: "Send the song I play most often to Bill." The phone
should realize that an MP3 player holds songs, and that the MP3 app has a function
to order songs by play frequency. It should know how to send a file to another user,
and how to look up the user's contact information. We use state-of-the art natural
language understanding, commonsense reasoning, and a partial-order planner.
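
A toy sketch of the capability-chaining idea follows (invented data and function names, not the project's planner): apps register what they can provide, and a request like the song example above is met by composing those capabilities. In the real system the chain would be derived by natural language understanding and a partial-order planner rather than hard-wired.

SONGS = {"Song A": 42, "Song B": 17}                 # title -> play count (MP3 app)
CONTACTS = {"Bill": "bill@example.com"}              # name -> address (contacts app)

def most_played_song():
    return max(SONGS, key=SONGS.get)

def lookup_contact(name):
    return CONTACTS[name]

def send_file(item, address):
    print(f"sending '{item}' to {address}")

def handle_request(recipient):
    # Hypothetical hard-wired plan standing in for NLU + planning.
    send_file(most_played_song(), lookup_contact(recipient))

if __name__ == "__main__":
    handle_request("Bill")     # sending 'Song A' to bill@example.com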

159. Graphical Interfaces for Software Visualization and Debugging Henry Lieberman

This project explores how modern graphical interface techniques and explicit
support for the user's problem-solving activities can make more productive
interfaces for debugging, which accounts for half the cost of software development.
Animated representations of code, a reversible control structure, and instant
connections between code and graphical output are some of the techniques used.

160. Human Goal Network Henry Lieberman and Dustin Smith

What motivates people? What changes do people want in the world? We approach
questions of this kind by mining goals and plans from text-based websites: wikiHow,
eHow, 43things, to-do lists, and commonsense knowledge bases. 43things tells us
about people's long term ambitions. How-to instructions and to-do lists tell us about
everyday activities. We've analyzed the corpus to find out which goals are most
popular, controversial, and concealed. The resulting goal network can be used for
plan recognition, natural language understanding, and building intelligent interfaces
that understand why they are being used. Come by and learn about how you can
use this knowledge about actions/goals, their properties (cost, duration, location)
and their relations in your own applications.

161. Improving Flexibility of Natural Language Interfaces by Accommodating Vague and Ambiguous Input
Henry Lieberman and Dustin Arthur Smith
NEW LISTING
A major problem for natural language interfaces is their inability to handle text
whose meaning depends in part on context. If a user asks his car radio to play "a
fast song", or his calendar to schedule "a short meeting," the interpreter would have
to accommodate vagueness and ambiguity to figure out what he meant based on
what he said. For it to understand what songs or events the speaker intended, it
must make decisions that depend on assumed common knowledge about the world
and language. Our research presents two approaches for reducing uncertainty in
natural language interfaces, by modeling interpretation as a plan recognition
problem.

162. Learning Common Sense in a Second Language Henry Lieberman, Ned Burns and Li Bian

It's well known that living in a foreign country dramatically improves the
effectiveness of learning a second language over classroom study alone. This is
likely because people make associations with the foreign language as they see and
participate in everyday life activities. We are designing language-teaching
sequences for a sensor-equipped residence that can detect user interaction with
household objects. We use our common-sense knowledge base and reasoning
tools to construct teaching sequences, wholly in the target language, of sentences
and question-answering interactions that gradually improve the learner's language
competence. For example, the first time the user sits in a chair, the system
responds with the foreign-language word for "chair," and later with statements and
questions such as, "You sit in the chair" (complete sentence), "You sat in the chair"
(tenses), "What is the chair made of?" (question, materials), or "Why are you sitting
in the chair?" (goals, plans).

163. Multi-Lingual Hyemin Chung, Jaewoo Chung, Wonsik Kim, Sung Hyon Myaeng and Walter
ConceptNet Bender

A ConceptNet in English is already established and working well. We are now
attempting to expand it to other languages and cultures. This project is an extended
ConceptNet with Korean common sense, which is fundamentally different from
English. Through this project, we can learn how to expand the ConceptNet into
other languages and how to connect them. By connecting English and Korean
ConceptNets, we are hoping not only to see cultural or linguistic differences, but
also to solve problems, such as the ambiguity of multivocal words, that are
difficult to solve with only one ConceptNet.

164. Multilingual Common Sense Aparecido Fabiano Pinatti de Carvalho, Jesus Savage Carmona,
Marie Tsutsumi, Junia Anacleto, Henry Lieberman, Jason Alonso, Kenneth Arnold,
Robert Speer, Vania Paula de Almeida and Veronica Arreola Rios

This project aims to collect and reason over common-sense knowledge in
languages other than English. We have collected large bodies of common-sense
knowledge in Portuguese and Korean, and we are expanding to other languages
such as Spanish, Dutch, and Italian. We can use techniques based on
AnalogySpace to discover correlations between languages, enabling our knowledge
bases in different languages to learn from each other.

Alumni Contributors: Hyemin Chung, Jose H. Espinosa, Wonsik Kim and Yu-Te
Shen
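
The cross-language correlation idea can be illustrated with a toy AnalogySpace-style example (a tiny hand-made matrix, not the real knowledge base): concepts from two languages share feature columns, a truncated SVD smooths the sparse assertions, and cross-language neighbors show up as high cosine similarity in the reduced space.

import numpy as np

# Tiny concept-by-feature matrix; rows mix English and Portuguese concepts.
concepts = ["dog/en", "cachorro/pt", "bird/en", "rice/en", "arroz/pt"]
features = ["is_animal", "can_fly", "is_food"]          # column meanings
A = np.array([
    [1, 0, 0],   # dog/en
    [1, 0, 0],   # cachorro/pt
    [1, 1, 0],   # bird/en
    [0, 0, 1],   # rice/en
    [0, 0, 1],   # arroz/pt
], dtype=float)

# Truncated SVD: keep the two strongest latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
embedding = U[:, :k] * s[:k]                             # one row per concept

def similarity(a, b):
    x, y = embedding[concepts.index(a)], embedding[concepts.index(b)]
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

print(similarity("dog/en", "cachorro/pt"))               # high: same latent profile
print(similarity("dog/en", "arroz/pt"))                  # low: unrelated concepts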

165. Navigating in Very Large Display Spaces Henry Lieberman

How would you browse a VERY large display space, such as a street map of the
entire world? The traditional solution is zoom and pan, but these operations have
drawbacks that have gone unchallenged for decades. Shifting attention loses the
wider context, leading to that "lost in hyperspace" feeling. We are exploring
alternative solutions, such as a new technique that allows zooming and panning in
multiple translucent layers.

166. Open Interpreter Henry Lieberman and Dustin Arthur Smith

NEW LISTING
Language interpretation requires going beyond the words to derive what the
speaker meant–cooperatively making 'leaps of faith' and putting forth assumptions
that can later be revised or retracted. Current natural language interfaces are
opaque; when interpretation goes wrong–which it inevitably does–the human is left
without recourse. The Open Interpreter project brings the assumptions involved with
interpreting English event descriptions into the user interface, so people can
participate in teaching the computer to derive the same common-sense
assumptions that they expected. We show the immediate applications for an
intelligent calendaring application.



167. ProcedureSpace: Managing Informality by Example Henry Lieberman and Kenneth C. Arnold

Computers usually require us to be precise about what we want them to do and
how, but humans find it hard to be so formal. If we gave computers formal examples
of our informal instructions, maybe they could learn to relate ordinary users' natural
instructions with the specifications, code, and tests with which they are comfortable.
Zones and ProcedureSpace are examples of this. Zones is a code search interface
that connects code with comments about its purpose. Completed searches become
annotations, so the system learns by example. The backend, ProcedureSpace,
finds code for a purpose comment (or vice versa) by relating words and phrases to
code characteristics and natural language background knowledge. Users of the
system were able to describe what they wanted in their own words, and often found
that the system gave them helpful code.

168. Programming in Natural Language Henry Lieberman and Moin Ahmad

We want to build programming systems that can converse with their users to build
computer programs. Such systems will enable users without programming expertise
to write programs using natural language. The text-based, virtual-world
environments called the MOO (multi-user, object-oriented Dungeons and Dragons)
allow their users to build objects and give them simple, interactive, text-based
behaviors. These behaviors allow other participants in the environment to interact
with those objects by invoking actions and receiving text messages. Through our
natural-language dialogue system, the beginning programmer will be able to
describe objects and the messages in MOO environments.

169. Raconteur: From Chat to Stories Henry Lieberman and Pei-Yu Chi

Raconteur is a story-editing system for conversational storytelling that provides
intelligent assistance in illustrating a story with photos and videos from an annotated
media library. It performs natural language processing on a text chat between two
or more participants, and recommends appropriate items from a personal media
library to illustrate a story. A large common-sense knowledge base and a novel
common-sense inference technique are used to find relevant media materials to
match the story intent in a way that goes beyond keyword matching or word
co-occurrence based techniques. Common-sense inference can identify
larger-scale story patterns such as expectation violation or conflict and resolution,
and helps a storyteller to chat and brainstorm his personal stories with a friend.

170. Relational Analogies in Semantic Networks Henry Lieberman and Jayant Krishnamurthy

Analogy is a powerful comparison mechanism, commonly thought to be central to
human problem solving. Analogies like "an atom is like the solar system" enable
people to effectively transfer knowledge to new domains. Can we enable computers
to do similar comparisons? Prior work on analogy (structure mapping) provides
guidance about the nature of analogies, but implementations of these theories are
inefficient and brittle. We are working on a new analogy mechanism that uses
instance learning to make robust, efficient comparisons.

171. Ruminati: Tackling Cyberbullying with Computational Empathy Karthik Dinakar, Henry Lieberman,
and Birago Jones

The scourge of cyberbullying has assumed worrisome proportions with an
ever-increasing number of adolescents admitting to having dealt with it either as a
victim or bystander. Anonymity and the lack of meaningful supervision in the
electronic medium are two factors that have exacerbated this social menace. This
project explores computational methods from natural language processing and
reflective user interfaces to alleviate this problem.



172. Storied Navigation Henry Lieberman

Today, people can tell stories by composing, manipulating, and sequencing
individual media artifacts using digital technologies. However, these tools offer little
help in developing a story's plot. Specifically, when a user tries to construct her
story based on a collection of individual media elements (videos, audio samples),
current technological tools do not provide helpful information about the possible
narratives that these pieces can form. Storied Navigation is a novel approach to this
problem; media sequences are tagged with free-text annotations and stored as a
collection. To tell a story, the user inputs a free-text sentence and the system
suggests possible segments for a storied succession. This process iterates
progressively, helping the user to explore the domain of possible stories. The
system achieves the association between the input and the segments' annotations
using reasoning techniques that exploit the WordNet semantic network and
common-sense reasoning technology.

Alumni Contributors: Barbara Barry, Glorianna Davenport and edshen

173. Time Out: Reflective User Interface for Social Networks Birago Jones, Henry Lieberman and
Karthik Dinakar

Time Out is an experimental user interface system for addressing cyberbullying on
social networks. A Reflective User Interface (RUI) is a novel concept to help users
consider the possible consequences of their online behavior, and assist in
intervention or mitigation of potentially negative/harmful actions.

Andy Lippman—Viral Spaces


How to make scalable systems that enhance how we learn from and experience real
spaces.

174. Air Mobs Andy Lippman, Henry Holtzman and Eyal Toledano

NEW LISTING
Air Mobs is a community-based P2P cross-operator WiFi tethering market. It
provides network connectivity when one device has no available Internet connection
or roaming costs are too high, and another device has excellent network
connectivity and a full battery. Air Mobs barters airtime between different mobile
phone users, using WiFi tethering to locate and establish an Internet link through
another device that has a good 3G connection. The member that provides the link
will gain airtime credit that can be used when he is not connected. Air Mobs creates
incentive via a secondary market–a user will be willing to share his data connection
since he will get data in return. The synergetic value emerges when different users
on different mobile operators provide network access to each other, compensating
for each operator's out-of-coverage areas.
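
A minimal sketch of the airtime-bartering ledger (illustrative rules and a made-up starting allowance, not the Air Mobs implementation): sharing your connection earns credit measured in megabytes, and that credit is what you later spend when you are the one without coverage.

class AirtimeLedger:
    def __init__(self, starting_credit_mb=50):
        self.credit = {}                      # device id -> megabytes of credit
        self.starting = starting_credit_mb

    def balance(self, device):
        return self.credit.setdefault(device, self.starting)

    def record_session(self, host, guest, megabytes):
        """Host tethered guest: transfer credit from guest to host."""
        if self.balance(guest) < megabytes:
            raise ValueError("guest lacks credit for this session")
        self.credit[guest] -= megabytes
        self.credit[host] = self.balance(host) + megabytes

if __name__ == "__main__":
    ledger = AirtimeLedger()
    ledger.record_session(host="phone-A", guest="phone-B", megabytes=30)
    print(ledger.balance("phone-A"), ledger.balance("phone-B"))   # 80 20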

175. AudioFile Andy Lippman, Travis Rich and Stephanie Su

NEW LISTING
AudioFile overlays imperceptible tones on standard audio tracks to embed digital
information that can be decoded by standard mobile devices. AudioFile lets users
explore their media more deeply by granting them access to a new channel of
communication. The project creates sound that is simultaneously meaningful to
humans and machines. Movie tracks can be annotated with actor details, songs can
be annotated with artist information, or public announcements can be infused with
targeted, meaningful data.
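
One simple way to hide bits in audio, shown purely for illustration (the actual AudioFile encoding is not specified here), is on-off keying of a quiet near-ultrasonic tone: bits are written as the presence or absence of an 18 kHz tone in fixed-length frames mixed into the track, and recovered by measuring energy at that frequency per frame.

import numpy as np

RATE, CARRIER, FRAME = 44100, 18000, 2048          # sample rate, tone Hz, samples per bit

def embed(audio, bits, level=0.02):
    out = audio.copy()
    t = np.arange(FRAME) / RATE
    tone = level * np.sin(2 * np.pi * CARRIER * t)
    for i, bit in enumerate(bits):
        if bit:
            out[i * FRAME:(i + 1) * FRAME] += tone   # add the tone only for 1-bits
    return out

def extract(audio, n_bits):
    t = np.arange(FRAME) / RATE
    ref = np.exp(-2j * np.pi * CARRIER * t)          # single-frequency Fourier probe
    energies = [abs(np.dot(audio[i * FRAME:(i + 1) * FRAME], ref)) for i in range(n_bits)]
    threshold = (max(energies) + min(energies)) / 2
    return [1 if e > threshold else 0 for e in energies]

if __name__ == "__main__":
    music = 0.1 * np.random.randn(FRAME * 8)         # stand-in for a real track
    print(extract(embed(music, [1, 0, 1, 1, 0, 0, 1, 0]), 8))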



176. Augmented Matter Andy Lippman and Travis Rich

NEW LISTING
We explore techniques to integrate digital codes into physical objects. Spanning
both the hard and the soft, this work entails incorporating textures, patterns, and
passive electronic elements into the surfaces of objects in a coded manner. The
codes and the detectors for those codes represent unique research opportunities.
Our motivation is to turn opaque technologies into things that teach and expose
information about themselves through the sensing technologies we already carry,
or foreseeably could carry, on us. In addition, we envision making machines that know
what they are doing and what they are connected to based on the unique properties
of their encoded material.

177. Barter: A Market-Incented Wisdom Exchange Dawei Shen, Marshall Van Alstyne and Andrew Lippman

Creative and productive information interchange in organizations is often stymied by
a perverse incentive setting among the members. We transform that competition
into a positive exchange by using market principles. Specifically, we apply
innovative market mechanisms to construct incentives while still encouraging
pro-social behaviors. Barter includes means to enhance knowledge sharing,
innovation creation, and productivity. It is being tested at MIT and in three sponsor
companies and is becoming available as a readily installable package. We will
measure the results and test the effectiveness of an information market in
addressing organizational challenges. We are learning that transactions in rich
markets can become an organizing principle among people potentially as strong as
social networks.

178. Brin.gy: What Brings Us Together Henry Holtzman, Andy Lippman and Polychronis Ypodimatopoulos
NEW LISTING
We allow people to form dynamic groups focused on topics that emerge
serendipitously during everyday life. They can be long-lived or flower for a short
time. Examples include people interested in buying the same product, those with
similar expertise, those in the same location, or any collection of such attributes. We
call this the Human Discovery Protocol (HDP). Similar to how computers follow
well-established protocols like DNS in order to find other computers that carry
desired information, HDP presents an open protocol for people to announce bits of
information about themselves, and have them aggregated and returned back in the
form of a group of people that match against the user’s specified criteria. We
experiment with a web-based implementation (brin.gy) that allows users to join and
communicate with groups of people based on their location, profile information, and
items they may want to buy or sell.
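
A minimal sketch of the announce-and-match idea (illustrative only, not the HDP wire protocol): people announce small sets of attributes, and a query returns the group whose announcements contain every requested attribute.

announcements = {}                     # person -> {attribute: value}

def announce(person, **attributes):
    announcements.setdefault(person, {}).update(attributes)

def find_group(**criteria):
    return sorted(
        person for person, attrs in announcements.items()
        if all(attrs.get(k) == v for k, v in criteria.items())
    )

if __name__ == "__main__":
    announce("alice", location="kendall", wants_to_buy="bike")
    announce("bob",   location="kendall", expertise="android")
    announce("carol", location="harvard", wants_to_buy="bike")
    print(find_group(location="kendall"))                 # ['alice', 'bob']
    print(find_group(wants_to_buy="bike"))                # ['alice', 'carol']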

179. BTNz! Kent Larson, Andy Lippman, Shaun David Salzberg, Dan Sawada and
Jonathan Speiser
NEW LISTING
We are constructing a lightweight, viral interface consisting of a button and screen
strategically positioned around public spaces to foster social interactions. Users will
be able to upload messages for display on the screen when the button is pushed.
The idea is to explore if a simple, one-dimensional input device and a small output
device can be powerful enough to encourage people to share information about
their shared space and spur joint social activities. The work includes building an
application environment and collecting and analyzing data on the emergent social
activities. Later work may involve tying identity to button-pushers and providing
more context-aware messages to the users.

180. CoCam Henry Holtzman, Andy Lippman, Dan Sawada and Eyal Toledano

Collaboration and media creation are difficult tasks, both for people and for network
NEW LISTING
architectures. CoCam is a self-organizing network for real-time camera image
collaboration. Like all camera apps, just point and shoot; CoCam then automatically
joins other media creators into a network of collaborators. Network discovery,
creation, grouping, joining, and leaving is done automatically in the background,
letting users focus on participation in an event. We use local P2P middleware and a
3G negotiation service to create these networks for real-time media sharing.
CoCam also provides multiple views that make the media experience more
exciting–such as appearing to be in multiple places at the same time. The media is
immediately distributed and replicated in multiple peers, thus if a camera phone is
confiscated other users have copies of the images.

181. CoSync Henry Holtzman, Andy Lippman and Eyal Toledano

NEW LISTING
CoSync builds the ability to create and act jointly into mobile devices. This mirrors
the way we as a society act both individually and in concert. The CoSync device ecology
combines multiple stand-alone devices and controls them opportunistically as if they
are one distributed, or diffuse, device at the user’s fingertips. CoSync includes a
programming interface that allows time synchronized coordination at a granularity
that will permit watching a movie on one device and hearing the sound from
another. The open API encourages an ever growing set of such finely coordinated
applications.
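
The coordination API might look something like the following Python sketch, which
schedules actions against a shared clock so that two devices act in concert; the
CoSyncNode class, its clock-offset handling, and the timing values are illustrative
assumptions rather than the real CoSync interface.

    # Minimal sketch (Python) of time-synchronized coordination across devices,
    # in the spirit of the CoSync API described above; all names are hypothetical.

    import threading
    import time

    class CoSyncNode:
        """One device in a CoSync-style ecology."""

        def __init__(self, clock_offset=0.0):
            # Offset of this device's clock from the shared reference clock,
            # assumed to have been estimated earlier (e.g., via an NTP-like exchange).
            self.clock_offset = clock_offset

        def shared_now(self):
            return time.time() + self.clock_offset

        def schedule(self, shared_start, action):
            """Run `action` when the shared clock reaches `shared_start`."""
            delay = max(0.0, shared_start - self.shared_now())
            threading.Timer(delay, action).start()

    # Two devices agree to start playback 2 seconds from now on the shared clock:
    video_device = CoSyncNode(clock_offset=0.013)
    audio_device = CoSyncNode(clock_offset=-0.008)
    start = video_device.shared_now() + 2.0
    video_device.schedule(start, lambda: print("video: start frame 0"))
    audio_device.schedule(start, lambda: print("audio: start track 0"))
    time.sleep(2.5)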

182. Electric Price Tags Andy Lippman, Matthew Blackshaw and Rick Borovoy

Electric Price Tags are a realization of a mobile system that is linked to technology
in physical space. The underlying theme is that being mobile can mean far more
than focusing on a portable device—it can be the use of that device to unlock data
and technology embedded in the environment. In its current version, users can
reconfigure the price tags on a store shelf to display a desired metric (e.g., price,
unit price, or calories). While this information is present on the boxes of the items for
sale, comparisons would require individual analysis of each box. The visualization
provided by Electric Price Tags allows users to view and filter information in
physical space in ways that were previously possible only online.

183. Geo.gy: Location Shortener Andy Lippman and Polychronis Ypodimatopoulos
NEW LISTING

Were you ever in the middle of a conversation and needed to share your location
with the other party? Geo.gy is a location shortener service. It allows you to easily
share your location with your peers by encoding it in a short URL which we call a
"geolink". It is platform-independent, based on HTML5, so you can use any device
with a modern browser to generate a geolink, simply by visiting the project's page.
There are no user accounts, so geolinks remain anonymous. You can use Geo.gy to
add location context to a post, an SMS, or anything else you want decorated with
location context.
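
A geolink service of this kind can be sketched in a few lines of Python; the token
format, URL, and class names below are assumptions for illustration, not the actual
geo.gy implementation.

    # Minimal sketch (Python) of a geolink-style location shortener.

    import secrets

    class LocationShortener:
        def __init__(self, base_url="http://geo.gy/"):
            self.base_url = base_url
            self.links = {}                     # token -> (lat, lng)

        def shorten(self, lat, lng):
            """Store a coordinate anonymously and return a short geolink."""
            token = secrets.token_urlsafe(4)    # short random token
            self.links[token] = (lat, lng)
            return self.base_url + token

        def resolve(self, geolink):
            token = geolink.rsplit("/", 1)[-1]
            return self.links[token]

    svc = LocationShortener()
    link = svc.shorten(42.3601, -71.0942)       # a point near MIT
    print(link, "->", svc.resolve(link))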

184. Line of Sound Grace Rusi Woo, Rick Borovoy and Andy Lippman

We show how data can be used to deliver sound information only in the direction in
which one looks. The demonstration uses two 55-inch screens that transmit both
human- and machine-relevant information. Each screen shows a video that flashes a
single-bit indicator to a camera mounted on headphones. This indicator is used to
distinguish between the two screens and to correlate an audio track with its video
track.
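
One way to picture the single-bit pairing is the Python sketch below: each screen
flashes a distinct on/off pattern, the headphone camera reports a brightness sequence,
and the matching screen's audio track is selected. The pattern lengths, threshold, and
names are illustrative assumptions, not the demonstration's actual encoding.

    # Minimal sketch (Python) of screen identification from a flashed bit pattern.

    SCREEN_IDS = {
        "screen_left":  [1, 0, 1, 1, 0, 0, 1, 0],   # one bit flashed per video frame
        "screen_right": [0, 1, 0, 0, 1, 1, 0, 1],
    }

    def bits_from_brightness(samples, threshold=128):
        """Turn raw brightness samples from the camera into a bit sequence."""
        return [1 if s >= threshold else 0 for s in samples]

    def identify_screen(samples):
        """Return the screen whose ID pattern appears in the observed bits."""
        observed = bits_from_brightness(samples)
        for name, pattern in SCREEN_IDS.items():
            window = len(pattern)
            if any(observed[i:i + window] == pattern
                   for i in range(len(observed) - window + 1)):
                return name
        return None

    # Camera looking at the left screen: bright/dark frames follow its pattern.
    samples = [200, 40, 210, 190, 30, 50, 220, 35, 200, 40]
    print("play audio track for:", identify_screen(samples))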

185. LipSync Grace Rusi Woo (Pixels.IO), Eyal Toledano, Szymon Jakubczak (Pixels.IO)
NEW LISTING

LipSync is an interactive broadcast of famous monologues available in many
different languages to multiple users at the same time. Users direct their
smartphones to "tune in" to their desired language, either audio or closed
captioning, by simply pointing them at the relevant part of the screen. This allows
intuitive selection of the speaker and language preference. The precise
synchronization between the video and the audio streams creates a seamless
experience, where the user's natural motions give voice to moving lips. Multiple
people can listen to different audio streams associated with the video toward which
the mobile phone camera is pointed. This is demonstrated with two technologies
developed in the Viral Spaces group: CoSync and VRCodes.

186. Mapping Community Learning Jonathan Speiser and Joi Ito
NEW LISTING

We are beginning a program to work with a community such as Detroit to
understand grassroots innovation in a real-world context. Both the problems and the
solutions are identified and developed by local people. They range from local
communications, to matters of water and soil safety, to composting. The goal is to
spur an innovation process that amplifies local resources, is rooted in local
knowledge, and teaches both guests and residents a new process of innovation. By
creating a temporal and spatial map of the activities ongoing in Detroit, we provide a
window into the context of the innovation process.

187. NewsFlash Andy Lippman and Grace Rusi Woo
NEW LISTING

NewsFlash is a social way to experience the global and local range of current
events. People see a tapestry of newspaper front pages. The headlines and main
photos tell part of the story; NewsFlash tells you the rest. People point their phones
at a headline or picture of interest to bring up a feed of the article text from that
given paper. The data emanates from the screen and is captured by a cell
phone camera–any number of people can see it at once and discuss the panoply of
ongoing events. NewsFlash creates a local space that is simultaneously interactive
and provocative. We hope it gets people talking.

188. Peddl Andy Lippman, Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and
David Lakatos
NEW LISTING
Peddl creates a localized, perfect market. All offers are broadcasts, allowing users
to spot trends, bargains, and opportunities. With GPS- and Internet-enabled mobile
devices in almost every pocket, we see an opportunity for a new type of
marketplace which takes into account your physical location, availability, and open
negotiation. Like other real-time activities, we are exploring transactions as an
organizing principle among people that, like Barter, may be strong, rich, and
long-lived.

189. Point & Shoot Data Andy Lippman and Travis Rich
NEW LISTING

Point & Shoot Data explores the use of visible light as a wireless communication
medium for mobile devices. A snap-on case allows users to send messages to
other mobile devices based on directionality and proximity. No email address,
phone number, or account login is needed, just point and shoot your messages!
The project enables infrastructure-free, scalable, proximity-based communication
between two mobile devices.

Alumni Contributors: Samuel Luescher and Shaun David Salzberg

190. Reach Andy Lippman, Boris G Kizelshteyn and Rick Borovoy

Reach merges inherently local communications with user requests or offers of
services. It is built atop data from services users already use, like Facebook and
Google Latitude. Reach is intended to demonstrate a flexible, attractive mobile
interface that allows one to discover "interesting" aspects of the environment and to
call upon services as needed. These can range from a broadcast offer to serve as a
triage medic, to a way to share a cab or get help for a technical service problem like
plugging into a video projector.

191. Recompose Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and David Lakatos

Human beings have long shaped the physical environment to reflect designs of form
and function. As an instrument of control, the human hand remains the most
fundamental interface for affecting the material world. In the wake of the digital
revolution, this is changing, bringing us to reexamine tangible interfaces. What if we
could now dynamically reshape, redesign, and restructure our environment using
the functional nature of digital tools? To address this, we present Recompose, a
framework allowing direct and gestural manipulation of our physical environment.
Recompose complements the highly precise, yet concentrated affordance of direct
manipulation with a set of gestures, allowing functional manipulation of an actuated
surface.

192. Social Transactions/Open Transactions Andy Lippman, Kwan Lee, Dawei Shen, Eric Shyu and
Phumpong Watanaprakornkul
Social Transactions is an application that allows communities of consumers to
collaboratively sense the market from mobile devices, enabling more informed
financial decisions in a geo-local and timely context. The mobile application not only
allows users to perform transactions, but also to inform, share, and purchase in
groups at desired times. It could, for example, help people connect opportunistically
in a local area to make group purchases, pick up an item for a friend, or perform
reverse auctions. Our framework is an Open Transaction Network that enables
applications from restaurant menu recommendations to electronics purchases. We
tested this with MIT's TechCASH payment system to investigate whether shared
social transactions could provide just-in-time influences to change behaviors.

193. T(ether) Hiroshi Ishii, Andy Lippman, Matthew Blackshaw and David Lakatos

T(ether) is a novel spatially aware display that supports intuitive interaction with
volumetric data. The display acts as a window affording users a perspective view of
three-dimensional data through tracking of head position and orientation. T(ether)
creates a 1:1 mapping between real and virtual coordinate space allowing
immersive exploration of the joint domain. Our system creates a shared workspace
in which co-located or remote users can collaborate in both the real and virtual
worlds. The system allows input through capacitive touch on the display and a
motion-tracked glove. When placed behind the display, the user’s hand extends into
the virtual world, enabling the user to interact with objects directly.

194. T+1 Dawei Shen, Rick Borovoy and Andrew Lippman

T+1 is an application that creates an iterative structure to help groups organize their
interests and schedules. Users of T+1 receive instructions and send their personal
information through mobile devices at discretized time steps, orchestrated by a
unique, adaptive scheduling engine. At each time-step t, T+1 takes as inputs
several relevant factors of human interactions, such as participants' interests,
opinions, locations, and partner matching schedules. It then computes and
optimizes the structure and format of group interactions for the next interval. T+1
facilitates consensus formation, better group dynamics, and more engaging user
experiences by using a clearly visible and comprehensible process. We are
planning to deploy the platform in both academic and political discussion settings and
to analyze how user opinions and interests evolve over time to understand its efficacy.
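
One way to picture a single scheduling step is the Python sketch below, which greedily
regroups participants by shared interest for the next interval; the grouping rule, group
size, and data layout are illustrative assumptions, not the actual T+1 engine.

    # Minimal sketch (Python) of a T+1-style iterative grouping step.

    from collections import defaultdict

    def next_round_groups(participants, group_size=4):
        """Given each participant's declared interests, compute groups for step t+1."""
        by_interest = defaultdict(list)
        for name, interests in participants.items():
            for topic in interests:
                by_interest[topic].append(name)

        groups, assigned = [], set()
        # Start with the most popular topics so shared interests anchor each group.
        for topic, names in sorted(by_interest.items(), key=lambda kv: -len(kv[1])):
            pool = [n for n in names if n not in assigned]
            for i in range(0, len(pool), group_size):
                chunk = pool[i:i + group_size]
                if chunk:
                    groups.append((topic, chunk))
                    assigned.update(chunk)
        return groups

    people = {
        "ana": ["energy", "housing"], "ben": ["energy"], "cam": ["housing"],
        "dee": ["energy", "transit"], "eli": ["transit"], "fay": ["housing"],
    }
    for topic, members in next_round_groups(people, group_size=2):
        print(topic, members)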

195. The Glass Infrastructure Henry Holtzman, Andy Lippman, Matthew Blackshaw, Jon Ferguson,
Catherine Havasi, Julia Ma, Daniel Schultz and Polychronis Ypodimatopoulos

This project builds a social, place-based information window into the Media Lab
using 30 touch-sensitive screens strategically placed throughout the physical
complex and at sponsor sites. The idea is to get people to talk among themselves
about the work that they jointly explore in a public place. We present Lab projects
as dynamically connected sets of "charms" that visitors can save, trade, and
explore. The GI demonstrates a framework for an open, integrated IT system and
shows new uses for it.

Alumni Contributors: Rick Borovoy, Greg Elliott and Boris Grigory Kizelshteyn

196. VR Codes Andy Lippman and Grace Woo
NEW LISTING

VR Codes are dynamic data invisibly hidden in television and graphic displays.
They allow the display to simultaneously present visual information in an unimpeded
way, and real-time data to a camera. Our intention is to make social displays that
many can use at once; using VR codes, many can draw data from a display and
control its use on a mobile device. We think of VR Codes as analogous to QR
codes for video, and envision a future where every display in the environment
contains latent information embedded in VR codes.

Tod Machover—Opera of the Future


How musical composition, performance, and instrumentation can lead to innovative
forms of expression, learning, and health.

197. A Toronto Symphony: Massive Musical Collaboration Tod Machover and Peter Alexander Torpey
NEW LISTING

The results of existing crowd-sourced and interactive music are limited so far, with
the public being only a small part of a final musical result, and often disconnected
from the artist leading the project. We believe that a new “musical ecology” is
needed for true creative collaboration between experts and amateurs that benefits
both. For this purpose, we are creating a new work for symphony orchestra in
collaboration with the entire city of Toronto. Called “A Toronto Symphony,” the
work–commissioned by the Toronto Symphony Orchestra–will be premiered in
March 2013. We are designing the necessary infrastructure, creative tools based on
Hyperscore, social media framework, and real-world community-building activities to
bring together an unprecedented number of people from diverse ages, experiences,
and musical backgrounds to create this new work. We also will establish a model for
creating complex creative collaborations between experts and everyone else.

198. Advanced Audio Systems for Live Performance Tod Machover and Ben Bloomberg

This project explores the contribution of advanced audio systems to live
performance, their design and construction, and their integration into the theatrical
design process. We look specifically at innovative input and control systems for
shaping the analysis and processing of live performance; and at large-scale output
systems which provide a meaningful virtual abstraction to DSP in order to create
flexible audio systems that can both adapt to many environments and achieve a
consistent and precise sound field for large audiences.

199. Death and the Powers: Redefining Opera Tod Machover, Ben Bloomberg, Peter Torpey, Elena Jessop,
Bob Hsiung, Michael Miller, Akito van Troyer, and Eyal Shahar
"Death and the Powers" is a groundbreaking opera that brings a variety of
technological, conceptual, and aesthetic innovations to the theatrical world. Created
by Tod Machover (composer), Diane Paulus (director), and Alex McDowell
(production designer), the opera uses the techniques of tomorrow to address
age-old human concerns of life and legacy. The unique performance environment,
including autonomous robots, expressive scenery, new Hyperinstruments, and
human actors, blurs the line between animate and inanimate. The opera premiered
in Monte-Carlo in fall 2010, with additional performances in Boston and Chicago in
2011 and continuing engagements worldwide.

200. Designing Immersive Multi-Sensory Eating Experiences Tod Machover and Janice Wang
NEW LISTING

Food offers a rich multi-modal experience that can deeply affect emotion and
memory. We're interested in exploring the artistic and expressive potential of food
beyond mere nourishment, as a means of creating memorable experiences that
involve multiple senses. For instance, music can change our eating experience by
altering our emotions during the meal, or by evoking a specific time and place.
Similarly, sight, smell, temperature can all be manipulated to combine with food for
expressive effect. In addition, by drawing upon people's physiology and upbringing,
we seek to create individual, meaningful sensory experiences.

201. Disembodied Performance Tod Machover, Peter Torpey and Elena Jessop

Early in the opera "Death and the Powers," the main character Simon Powers is
subsumed into a technological environment of his own creation. The set comes
alive through robotic, visual, and sonic elements that allow the actor to extend his
range and influence across the stage in unique and dynamic ways. This
environment must assume the behavior and expression of the absent Simon; to
distill the essence of this character, we recover performance parameters in real time
from physiological sensors, voice, and vision systems. Gesture and performance
parameters are then mapped to a visual language that allows the off-stage actor to
express emotion and interact with others on stage. To accomplish this, we
developed a suite of innovative analysis, mapping, and rendering software systems.
Our approach takes a new direction in augmented performance, employing a
non-representational abstraction of a human presence that fully translates a
character into an environment.

202. DrumTop Tod Machover and Akito Oshiro van Troyer
NEW LISTING

This project aims to transform everyday objects into percussive musical
instruments, encouraging people to rediscover their surroundings through musical
interactions with the objects around them. DrumTop is a drum machine made up of
eight transducers. Placing objects on top of the transducers triggers a "hit," causing
sounds to come out from the objects themselves. In addition, users can program
drum patterns by pushing on a transducer, and the weight of an object can be
measured to control the strength of a “hit.”
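
The pattern-and-weight behavior can be pictured as a simple step sequencer, sketched
below in Python; the channel count matches the eight transducers described above, while
the timing and the weight-to-strength scaling are assumptions for illustration only.

    # Minimal sketch (Python) of a DrumTop-style step sequencer: eight channels,
    # a programmable pattern per channel, and hit strength scaled by the measured
    # weight of the object on each transducer.

    import time

    NUM_CHANNELS = 8
    STEPS = 8

    def hit(channel, strength):
        # Placeholder for driving a transducer; here we just log the pulse.
        print(f"channel {channel}: hit at strength {strength:.2f}")

    def run_sequencer(patterns, weights, bpm=120, bars=1):
        """patterns[c][s] is True if channel c fires on step s; weights in grams."""
        step_duration = 60.0 / bpm / 2          # eighth-note steps
        max_weight = max(weights) or 1.0
        for _ in range(bars):
            for step in range(STEPS):
                for channel in range(NUM_CHANNELS):
                    if patterns[channel][step]:
                        # Heavier objects get (up to) full drive strength.
                        hit(channel, weights[channel] / max_weight)
                time.sleep(step_duration)

    patterns = [[False] * STEPS for _ in range(NUM_CHANNELS)]
    patterns[0][0] = patterns[0][4] = True      # a heavy, boomy object
    patterns[3][2] = patterns[3][6] = True      # a lighter, rattly object
    weights = [800, 0, 0, 120, 0, 0, 0, 0]
    run_sequencer(patterns, weights)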

203. Gestural Media Tod Machover and Elena Jessop
Framework
We are all equipped with two extremely expressive instruments for performance: the
body and the voice. By using computer systems to sense and analyze human
movement and voices, artists can take advantage of technology to augment the
body's communicative powers. However, the sophistication, emotional content, and
variety of expression possible through the original physical channels are often not
captured by or addressed in the technologies used for analyzing them, and thus
cannot be transferred from body to digital media. To address these issues, we are
developing systems that use machine learning to map continuous input data,
whether of gesture or voice or biological/physical states, to a space of expressive,
qualitative parameters. We are also developing a new framework for expressive
performance augmentation, allowing users to easily create clear, intuitive, and
comprehensible mappings by using high-level qualitative movement descriptions,
rather than low-level descriptions of sensor data streams.

204. Hyperinstruments Tod Machover

The Hyperinstrument project creates expanded musical instruments and uses
technology to give extra power and finesse to virtuosic performers. They were
designed to augment a wide range of traditional musical instruments and have been
used by some of the world's foremost performers (Yo-Yo Ma, the Los Angeles
Philharmonic, Peter Gabriel, and Penn & Teller). Research focuses on designing
computer systems that measure and interpret human expression and feeling,
exploring appropriate modalities and content of interactive art and entertainment
environments, and building sophisticated interactive musical instruments for
non-professional musicians, students, music lovers, and the general public. Recent
projects involve both new hyperinstruments for children and amateurs, and high-end
hyperinstruments capable of expanding and transforming a symphony orchestra or
an entire opera stage.

Alumni Contributors: Roberto M. Aimi, Mary Farbood, Ed Hammond, Tristan Jehan,
Margaret Orth, Dan Overholt, Egon Pasztor, Joshua Strickon, Gili Weinberg and
Diana Young

205. Hyperscore Tod Machover

Hyperscore is an application to introduce children and non-musicians to musical
composition and creativity in an intuitive and dynamic way. The "narrative" of a
composition is expressed as a line-gesture, and the texture and shape of this line
are analyzed to derive a pattern of tension-release, simplicity-complexity, and
variable harmonization. The child creates or selects individual musical fragments in
the form of chords or melodic motives, and layers them onto the narrative-line with
expressive brushstrokes. The Hyperscore system automatically realizes a full
composition from a graphical representation, allowing individuals with no musical
training to create professional pieces. Currently, Hyperscore uses a mouse-based
interface; the final version will support freehand drawing, and integration with the
Music Shapers and Beatbugs to provide a rich array of tactile tools for manipulation
of the graphical score.

Alumni Contributors: Mary Farbood, Ed Hammond, Tristan Jehan, Margaret Orth,
Dan Overholt, Egon Pasztor, Joshua Strickon, Gili Weinberg and Diana Young

206. Media Scores Tod Machover and Peter Torpey
NEW LISTING

Media Scores extends the concept of a musical score to other modalities to facilitate
the process of authoring and performing multimedia compositions, providing a
medium through which to realize a modern-day Gesamtkunstwerk. Through
research into the representation and the encoding of expressive intent, systems for
composing with media scores are being developed. Using such a tool, the
composer will be able to shape an artistic work that may be performed through
human and technological means in a variety of media and modalities. Media scores
offer the potential for authoring content considering live performance data and the
potential for audience participation and interaction. This paradigm bridges the
extremes of the continuum from composition to performance, allowing for
improvisatory compositional acts at performance time. The media score also
provides a common point of reference in collaborative productions as well as the
infrastructure for real-time control of technologies used during live performance.

207. Personal Opera Tod Machover and Peter Torpey

Personal Opera is a radically innovative creative environment that enables anyone
to create musical masterpieces sharing one’s deepest thoughts, feelings, and
memories. Based on our design of, and experience with, such projects as
Hyperscore and the Brain Opera, we are developing a totally new environment to
allow the incorporation of personal stories, images, and both original and well-loved
music and sounds. Personal Opera builds on our guiding principle that active music
creation yields far more powerful benefits than passive listening. Using music as the
through-line for assembling and conveying our own individual legacies, Personal
Opera represents a new form of expressive archiving: easy to use and powerful to
experience. In partnership with the Royal Opera House in London, we have begun
conducting Personal Opera workshops specifically targeting seniors to help them
tell their own meaningful stories through music, text, visuals, and acting.

208. Remote Theatrical Immersion: Extending "Sleep No More" Tod Machover, Punchdrunk, Akito Van Troyer,
Ben Bloomberg, Gershon Dublon, Jason Haas, Elena Jessop, Brian Mayton, Eyal Shahar, Jie Qi,
Nicholas Joliat, and Peter Torpey
NEW LISTING

We are collaborating with London-based theater group Punchdrunk to create an
online platform connected to their NYC show, Sleep No More. In the live show,
masked audience members explore and interact with a rich environment,
discovering their own narrative pathways. We have developed an online companion
world to this real-life experience, through which online participants partner with live
audience members to explore the interactive, immersive show together. Pushing the
current capabilities of web standards and wireless communications technologies,
the system delivers personalized multimedia content allowing each online
participant to have a unique experience co-created in real time by his own actions
and those of his onsite partner. This project explores original ways of fostering
meaningful relationships between online and onsite audience members, enhancing
the experiences of both through the affordances that exist only at the intersection of
the real and the virtual worlds.

209. Vocal Vibrations: Expressive Performance for Body-Mind Wellbeing Tod Machover, Elena Jessop,
Rebecca Kleinberger, Le Laboratoire, and The Dalai Lama Center at MIT
NEW LISTING

Vocal Vibrations is exploring the relationships between human physiology and the
resonant vibrations of the voice. The voice and body are instruments everyone
possesses–they are incredibly individual, infinitely expressive, and intimately linked
to one's own physical form. In collaboration with Le Laboratoire in Paris and the
Dalai Lama Center at MIT, we are exploring the hypothesis that the singing voice
can influence mental and physical health through physicochemical phenomena and
in ways consistent with contemplative practices. We are developing a series of
multimedia experiences, including individual "meditations," a group "singing circle,"
and an iPad application, all effecting mood modulation and spiritual enhancement in
an enveloping context of stunningly immersive, responsive music. For Fall 2013, we
are developing a vocal art installation in Paris where private "grotto” environments
allow individual visitors to meditate using vibrations generated by their own voice,
augmented by visual, acoustic, and physical stimuli.

Alumni Contributor: Eyal Shahar

Pattie Maes—Fluid Interfaces
How to integrate the world of information and services more naturally into our daily
physical lives, enabling insight, inspiration, and interpersonal connections.

210. Augmented Product Counter Natan Linder, Pattie Maes and Rony Kubat
We have created an augmented reality (AR) based product display counter that
transforms any surface or object into an interactive surface, blending digital media
and information with physical space. This system enables shoppers to conduct
research in the store, learn about product features, and talk to a virtual expert to get
advice via built-in video conferencing. The Augmented Product Counter is based on
LuminAR technology, which can transform any standard product counter, enabling
shoppers to get detailed information on products as well as web access to read
unbiased reviews, compare pricing, and conduct research while they interact with
real products. This system delivers an innovative in-store shopping experience
combining live product interactions in a physical environment with the vast amount
of information available on the web in an engaging and interactive manner.

211. Blossom Pattie Maes and Sajid Sadi

Blossom is a multiperson awareness system that uses ioMaterials-based
techniques to connect distant friends and family. It provides an awareness medium
that does not rely on the attention- and reciprocity-demanding interfaces that are
provided by digital communication media such as mobile phones, SMS, and email.
Combining touch-based input with visual, haptic, and motile feedback, Blossoms
are created as pairs that can communicate over the network, echoing the conditions
of each other and forming an implicit, always-there link that physically expresses
awareness, while retaining the instantaneous capabilities that define digital
communication.

212. Community Data Portrait Pattie Maes and Doug Fritz

As research communities grow, it is becoming increasingly difficult to understand
the dynamics of a community: its history and the varying perspectives with which
that history is interpreted. As our information becomes more digital, the histories and
artifacts of community become increasingly hidden. The purpose here is to show a
given researcher how they fit into the background of a larger community, hopefully
strengthening weak ties and understanding. At a high level this project is intended
to have real impact by allowing the Media Lab community to reflect on what things it
has been working on over the past 25 years and where it should be heading next.
On a more individual level this is intended to help researchers within the community
situate themselves by better understanding the research directions and interests of
their collaborators.

213. Cornucopia: Digital Gastronomy Marcelo Coelho
Cornucopia is a concept design for a personal food factory, bringing the versatility of
the digital world to the realm of cooking. In essence, it is a 3D printer for food that
works by storing, precisely mixing, depositing, and cooking layers of ingredients.
Cornucopia's cooking process starts with an array of food canisters that refrigerate
and store a user's favorite ingredients. These are piped into a mixer and extruder
head that can accurately deposit elaborate combinations of food; while this takes
place, the food is heated or cooled. This fabrication process not only allows for the
creation of flavors and textures that would be completely unimaginable through
other cooking techniques, but it also allows the user to have ultimate control over
the origin, quality, nutritional value, and taste of every meal.

Alumni Contributors: William J. Mitchell and Amit Zoran

214. Defuse Aaron Zinman, Judith Donath and Pattie Maes

Defuse is a commenting platform that rethinks the medium's basic interactions. In a
world where a single article in The New York Times can achieve 3,000 comments,
the original design of public asynchronous text systems has reached its limit; it
needs more than social convention. Defuse uses context to change the basics of
navigation and message posting. It uses a combination of machine learning,
visualization, and structural changes to achieve this goal.

215. Display Blocks Pattie Maes and Pol Pla i Conesa
NEW LISTING

Display Blocks is a novel approach to display technology, which consists of
arranging six organic light emitting diode screens in a cubic form factor. The aim of
the project is to explore the possibilities that this type of display holds for data
visualization, manipulation and exploration. The research focuses on exploring how
the physicality of the screen can be leveraged to better interpret its contents. To this
end, the physical design is accompanied by a series of applications that
demonstrate the advantages of this technology.

216. EyeRing: A Compact, Intelligent Vision System on a Ring Suranga Nanayakkara and Roy Shilkrot
NEW LISTING

EyeRing is a wearable intuitive interface that allows a person to point at an object to
see or hear more information about it. We came up with the idea of a micro camera
worn as a ring on the index finger with a button on the side, which can be pushed
with the thumb to take a picture or a video that is then sent wirelessly to a mobile
phone to be analyzed. The user receives information about the object in either
auditory or visual form. Future versions of our proposed system may include more
sensors to allow non-visual data capture and analysis. This finger-worn
configuration of sensors opens up a myriad of possible applications for the visually
impaired as well as the sighted.

217. FlexDisplays Pattie Maes, Juergen Steimle, and Simon Olberding
NEW LISTING

We believe that in the near future many portable devices will have resizable
displays. This will allow for devices with a very compact form factor, which can
unfold into a large display when needed. In this project, we design and study novel
interaction techniques for devices with flexible, rollable, and foldable displays. We
explore a number of scenarios, including personal and collaborative uses.

218. Hyperego Pattie Maes and Aaron Zinman

When we meet new people in real life, we assess them using a multitude of signals
relevant to our upbringing, society, and our experiences and disposition. When we
encounter a new individual virtually, usually we are looking at a single
communication instance in bodiless form. How can we gain a deeper understanding
of this individual without the cues we have in real life? Hyperego aggregates
information across various online services to provide a more uniform data portrait of
the individual. These portraits are at the user's control, allowing specific data to be
hidden, revealed, or grouped in aggregate using an innovative privacy model.

219. Inktuitive: An Intuitive Physical Design Workspace Pranav Mistry and Kayato Sekiya

Despite the advances and advantages of computer-aided design tools, the
traditional pencil and paper continue to exist as the most important tools in the early
stages of design. Inktuitive is an intuitive physical design workspace that aims to
bring together conventional design tools such as paper and pencil with the power
and convenience of digital tools for design. Inktuitive also extends the natural
work-practice of using physical paper by giving the pen the ability to control the
design in physical 3-D space, freeing it from its tie to the paper. The intuition of pen and
paper are still present, but lines are captured and translated into shapes in the
digital world. The physical paper is augmented with overlaid digital strokes.
Furthermore, the platform provides a novel interaction mechanism for drawing and
designing using above-the-surface pen movements.

220. InterPlay: Full-Body Interaction Platform Pattie Maes, Seth Hunter and Pol Pla i Conesa
InterPlay is a platform for designers to create dynamic social simulations that
transform public spaces into immersive environments where people become the
central agents. It uses computer vision and projection to facilitate full-body
interaction with digital content. The physical world is augmented to create shared
experiences that encourage active play, negotiation, and creative composition.

221. ioMaterials Pattie Maes, Sajid Sadi and Amir Mikhak

ioMaterials is a project encompassing a variety of collocated sensing-actuation
platforms. The project explores various aspects of dense sensing for humane
communication, memory, and remote awareness. Using dense collocated sensing
and actuation, we can change common objects into an interface capable of
hiding unobtrusively in plain sight. Relational Pillow and Blossom are instantiations
of this ideal.

222. Liberated Pixels Susanne Seitinger

We are experimenting with systems that blur the boundary between urban lighting
and digital displays in public spaces. These systems consist of liberated pixels,
which are not confined to rigid frames as are typical urban screens. Liberated pixels
can be applied to existing horizontal and vertical surfaces in any configuration, and
communicate with each other to enable a different repertoire of lighting and display
patterns. We have developed Urban Pixels, a wireless infrastructure for liberated
pixels. Composed of autonomous units, the system presents a programmable and
distributed interface that is flexible and easy to deploy. Each unit includes an
on-board battery, RF transceiver unit, and microprocessor. The goal is to
incorporate renewable energy sources in future versions.

Alumni Contributor: William J. Mitchell

223. Light.Bodies Susanne Seitinger, Alex S. Taylor and Microsoft Research

“Light bodies” are mobile and portable, hand-held lights that respond to audio and
vibration input. The motivation to build these devices is grounded in a historical
reinterpretation of street lighting. Before fixed infrastructure illuminated cities at
night, people carried lanterns with them to make their presence known. Using this
as our starting point, we asked how we might engage people in more actively
shaping the lightscapes which surround them. A first iteration of responsive,
LED-based colored lights was designed for use in three different settings: a
choreographed dance performance, an outdoor public installation, and an
audio-visual event.

Alumni Contributor: William J. Mitchell

224. LuminAR Natan Linder, Pattie Maes and Rony Kubat

LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them
into a new category of robotic, digital information devices. The LuminAR Bulb
combines a Pico-projector, camera, and wireless computer in a compact form
factor. This self-contained system provides users with just-in-time projected
information and a gestural user interface, and it can be screwed into standard light
fixtures everywhere. The LuminAR Lamp is an articulated robotic arm, designed to
interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment
their environments with media and information, while seamlessly connecting with
laptops, mobile phones, and other electronic devices. LuminAR transforms surfaces
and objects into interactive spaces that blend digital media and information with the
physical space. The project radically rethinks the design of traditional lighting
objects, and explores how we can endow them with novel augmented-reality
interfaces.

225. MemTable Pattie Maes, Seth Hunter, Alexandre Milouchev and Emily Zhao

MemTable is a table with a contextual memory. The goal of the system is to
facilitate reflection on the long-term collaborative work practices of a small group by
designing an interface that supports meeting annotation, process documentation,
and visualization of group work patterns. The project introduces a tabletop designed
both to remember how it is used and to provide an interface for contextual retrieval
of information. MemTable examines how an interface that embodies the history of
its use can be incorporated into our daily lives in more ergonomic and meaningful
contexts.

226. Mouseless Pranav Mistry and Pattie Maes

Mouseless is an invisible computer mouse that provides the familiarity of interaction
of a physical mouse without actually needing a real hardware mouse. Despite the
advances in computing hardware technologies, the two-button computer mouse has
remained the predominant means to interact with a computer. Mouseless removes
the requirement of having a physical mouse altogether, but still provides the intuitive
interaction of a physical mouse with which users are familiar.

227. Moving Portraits Pattie Maes

A moving portrait is a framed portrait that is aware of and reacts to viewers’ presence
and body movements. A portrait represents a part of our lives and reflects our
feelings, but it is completely oblivious to the events that occur around it or to the
people who view it. By making a portrait interactive, we create a different and more
engaging relationship between it and the viewer.

228. MTM "Little John" Natan Linder

MTM "Little John" is a multi-purpose, mid-size, rapid prototyping machine with the
goal of being a personal fabricator capable of performing a variety of tasks (3D
printing, milling, scanning, vinyl cutting) at a price point in the hundreds rather than
thousands of dollars. The machine was designed and built in collaboration with the
MTM—Machines that Make Project at MIT Center for Bits and Atoms.

229. Perifoveal Display Valentin Heun, Anette von Kapri and Pattie Maes
NEW LISTING

Today's GUIs are made for small screens that show little information. Real-time
data that goes beyond one small screen needs to be continuously scanned with our
eyes in order to build an abstract model of it in one's mind, so conventional GUIs do
not scale to huge amounts of data. The Perifoveal Display takes this abstraction
away from the user and visualizes the data so that the full range of vision can be
used for monitoring. It does so by addressing the different visual systems of the eye:
our field of view spans roughly 120° and is highly sensitive to motion, while only the
central 6° or so is slower but detailed enough to read text.
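
The rendering rule can be sketched as follows in Python: widgets near the gaze point
are drawn as readable text, while peripheral widgets encode changes as motion. The
thresholds, viewing distance, and function names are assumptions based only on the
figures quoted above, not the project's actual implementation.

    # Minimal sketch (Python) of a perifoveal rendering rule.

    import math

    FOVEAL_DEGREES = 6.0       # central vision: slow, but can resolve text
    PERIPHERAL_DEGREES = 120.0 # full field: motion-sensitive, poor at detail

    def angular_distance(gaze, widget, viewing_distance):
        """Approximate visual angle (degrees) between gaze point and widget, in screen units."""
        dx, dy = widget[0] - gaze[0], widget[1] - gaze[1]
        return math.degrees(math.atan2(math.hypot(dx, dy), viewing_distance))

    def render_mode(gaze, widget, viewing_distance=600.0):
        angle = angular_distance(gaze, widget, viewing_distance)
        if angle <= FOVEAL_DEGREES / 2:
            return "text"        # detailed numbers and labels
        elif angle <= PERIPHERAL_DEGREES / 2:
            return "motion"      # encode value changes as movement of a large shape
        return "off"             # outside the usable field of view

    gaze = (960, 540)            # pixels, e.g., from a head or eye tracker
    for widget in [(980, 545), (300, 900), (-4000, 540)]:
        print(widget, "->", render_mode(gaze, widget))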

230. PoCoMo Pattie Maes, Seth Hunter and Roy Shilkrot

PoCoMo is an implementation of a vision of future projected social interfaces. In this
project we try to capture the playfulness of collaborative gaming and apply it to
projected interfaces. The maturing of handheld micro-projector technology, in
conjunction with advanced mobile environments, enables this novel type of
interaction. Our system is made of a micro-projector mobile device with a specially
designed case that turns it into a first-of-a-kind handheld mini-projector-camera
system. Computer Vision algorithms support collaborative interaction between
multiple users of the system. Through PoCoMo, we wish to explore the social
nature of projected interfaces. To accommodate this we designed the projection to
be of human cartoon-like characters that play out a personal interaction. Following
their human controllers, they recognize each other, wave hello, shake hands, and
exchange presents.

231. PreCursor Pranav Mistry and Pattie Maes
NEW LISTING

'PreCursor' is an invisible layer that hovers in front of the screen and enables novel
interaction that reaches beyond the current touchscreens. Using a computer mouse
provides two levels of depth when interacting with content on a screen. One can just
hover or can click. Hover allows receiving short descriptions, while click selects or
performs an action. PreCursor provides this missing sense of interaction to
touchscreens. PreCursor technology has the potential to expand beyond a basic
computer screen. It can also be applied to mobile touchscreens or to objects in the
real world, or can be the launching pad for creating a 3D space for interaction.

232. Pulp-Based Computing: A Framework for Building Computers Out of Paper Marcelo Coelho, Pattie Maes,
Joanna Berzowska and Lyndl Hall

Pulp-Based Computing is a series of explorations that combine smart materials,
papermaking, and printing. By integrating electrically active inks and fibers during
the papermaking process, it is possible to create sensors and actuators that
behave, look, and feel like paper. These composite materials not only leverage the
physical and tactile qualities of paper, but can also convey digital information,
spawning new and unexpected application domains in ubiquitous and pervasive
computing at extremely affordable costs.

233. Quickies: Intelligent Sticky Notes Pranav Mistry and Pattie Maes

The goal of Quickies is to bring one of the most useful inventions of the twentieth
century into the digital age: the ubiquitous sticky note. Quickies enriches the
experience of using sticky notes by linking hand-written sticky notes to mobile
phones, digital calendars, task-lists, email, and instant messaging clients. By
augmenting the familiar, ubiquitous sticky note, Quickies leverages existing patterns
of behavior, merging paper-based sticky note usage with the user's informational
experience. The project explores how the use of artificial intelligence (AI), natural
language processing (NLP), RFID, and ink-recognition technologies can make it
possible to create intelligent sticky notes that can be searched, located, can send
reminders and messages, and more broadly, can act as an I/O interface to the
digital information world.

234. ReachIn Anette von Kapri, Seth Hunter, and Pattie Maes
NEW LISTING

Remote collaboration systems are still far from offering the same rich experience
that collocated meetings provide. Collaborators can transmit their voice and face at
a distance, but it is very hard to point at physical objects and interpret gestures.
ReachIn explores how remote collaborators can "reach into" a shared digital
workspace where they can manipulate virtual objects and data. The collaborators
see their live 3D recreated mesh in a shared virtual space and can point at data or
3D models. They can grab digital objects with their bare hands, and translate, scale,
and rotate them.

235. ReflectOns: Mental Prostheses for Self-Reflection Pattie Maes and Sajid Sadi

ReflectOns are objects that help people think about their actions and change their
behavior based on subtle, ambient nudges delivered at the moment of action.
Certain tasks—such as figuring out the number of calories consumed, or amount of
money spent eating out—are generally difficult for the human mind to grapple with.
By using in-place sensing combined with gentle feedback and understanding of
users' goals, we can recognize behaviors and trends, and provide a reflection of
their own actions tailored to enable both better understanding of the repercussions
of those actions, and changes to their behaviors to help them better match their own
goals.

236. Remnant: Handwriting Memory Card Pattie Maes and Sajid Sadi

Remnant is a greeting card that merges the affordances of physical materials with
the temporal malleability of digital systems to create, enshrine, and reinforce the
very thing that makes a greeting personal: the hand of the sender. The card
records both the timing and the form of the sender's handwriting when it is first
used. At a later time, collocated output recreates the handwriting, allowing the
invisible, memorized hand of the sender to write his or her message directly in front
of the recipient.

237. Sensei: A Mobile Tool for Language Learning Pattie Maes, Suranga Nanayakkara and Roy Shilkrot

Sensei is a mobile interface for language learning (words, sentences,
pronunciation). It combines techniques from computer vision, augmented reality,
speech recognition, and commonsense knowledge. In the current prototype, the
user points his cell phone at an object and then sees the word and hears it
pronounced in the language of his choice. The system also shows more information
pulled from a commonsense knowledge base. The interface is primarily designed to
be used as an interactive and fun language-learning tool for children. Future
versions will be applied to other contexts such as real-time language translation for
face-to-face communication and assistance to travelers for reading information
displays in foreign languages; in addition, future versions will provide feedback to
users about whether they are pronouncing words correctly. The project is
implemented on a Samsung Galaxy phone running Android, donated by Samsung
Corporation.

238. Shutters: A Permeable Surface for Environmental Control and Communication Marcelo Coelho and Pattie Maes

Shutters is a permeable kinetic surface for environmental control and
communication. It is composed of actuated louvers (or shutters) that can be
individually addressed for precise control of ventilation, daylight incidence, and
information display. By combining smart materials, textiles, and computation,
Shutters builds upon other facade systems to create living environments and work
spaces that are more energy efficient, while being aesthetically pleasing and
considerate of their inhabitants' activities.

239. Siftables: Physical Interaction with Digital Media Pattie Maes

Siftables are compact electronic devices with motion sensing, graphical display, and
wireless communication. One or more Siftables may be physically manipulated to
interact with digital information and media. A group of Siftables can thus act in
concert to form a physical, distributed, gesture-sensitive, human-computer interface.
Each Siftable object is stand-alone (battery-powered and wireless); Siftables do not
require installed infrastructure such as large displays, instrumented tables, or
cameras in order to be used. Siftables' key innovation is to give direct physical
embodiment to information items and digital media content, allowing people to use
their hands and bodies to manipulate these data instead of relying on virtual cursors
and windows. By leveraging people’s ability to manipulate physical objects,
Siftables radically simplify the way we interact with information and media.

Alumni Contributors: Jeevan James Kalanithi and David Merrill

240. Six-Forty by Four-Eighty: An Interactive Lighting System Marcelo Coelho and Jamie Zigelbaum

Six-Forty by Four-Eighty is an interactive lighting system composed of an array of
magnetic physical pixels. Individually, pixel-tiles change their color in response to
touch and communicate their state to each other by using a person's body as the
conduit for information. When grouped together, the pixel-tiles create patterns and
animations that can serve as a tool for customizing our physical spaces. By
transposing the pixel from the confines of the screen and into the physical world,
focus is drawn to the materiality of computation and new forms for design emerge.

241. SixthSense Pranav Mistry

Information is often confined to paper or computer screens. SixthSense frees data
from these confines and seamlessly integrates information and reality. With the
miniaturization of computing devices, we are always connected to the digital world,
but there is no link between our interactions with these digital devices and our
interactions with the physical world. SixthSense bridges this gap by augmenting the
physical world with digital information, bringing intangible information into the
tangible world. Using a projector and camera worn as a pendant around the neck,
SixthSense sees what you see and visually augments surfaces or objects with
which you interact. It projects information onto any surface or object, and allows
users to interact with the information through natural hand gestures, arm
movements, or with the object itself. SixthSense makes the entire world your
computer.

242. SPARSH Pranav Mistry, Suranga Nanayakkara, and Pattie Maes

SPARSH explores a novel interaction method to seamlessly transfer data among
multiple users and devices in a fun and intuitive way. A user touches a data item to
be copied from a device, conceptually saving the item in his or her body. Next, the
user touches the other device to which he or she wants to paste/pass the saved
content. SPARSH uses touch-based interactions as indications for what to copy and
where to pass it. Technically, the actual transfer of media happens via the
information cloud.
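
The copy-through-the-cloud flow can be sketched in Python as below; the in-memory
clipboard, class names, and method names are stand-ins for the actual cloud service
and device software.

    # Minimal sketch (Python) of SPARSH-style copy and paste through the cloud:
    # touching an item on one device stores it under the user's identity, and
    # touching another device retrieves it.

    class CloudClipboard:
        """Stands in for the information cloud that actually carries the media."""
        def __init__(self):
            self._store = {}                  # user_id -> payload

        def put(self, user_id, payload):
            self._store[user_id] = payload

        def get(self, user_id):
            return self._store.get(user_id)

    class Device:
        def __init__(self, name, cloud):
            self.name, self.cloud = name, cloud

        def touch_copy(self, user_id, item):
            """User touches an item: conceptually 'saving it in their body'."""
            self.cloud.put(user_id, item)
            print(f"{self.name}: copied {item!r} for {user_id}")

        def touch_paste(self, user_id):
            item = self.cloud.get(user_id)
            print(f"{self.name}: pasted {item!r} for {user_id}")
            return item

    cloud = CloudClipboard()
    phone, tablet = Device("phone", cloud), Device("tablet", cloud)
    phone.touch_copy("alice", "photo_042.jpg")
    tablet.touch_paste("alice")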

243. Spotlight Pattie Maes and Sajid Sadi

Spotlight is about an artist's ability to create a new meaning using the combination
of interactive portraits and diptych or polyptych layouts. The mere placement of two
or more portraits near each other is a known technique to create a new meaning in
the viewer's mind. Spotlight takes this concept into the interactive domain, creating
interactive portraits that are aware of each other's state and gesture. So not only the
visual layout, but also the interaction with others creates a new meaning for the
viewer. Using a combination of interaction techniques, Spotlight engages the viewer
at two levels. At the group level, the viewer influences the portrait's "social
dynamics." At the individual level, a portrait's "temporal gestures" expose much
about the subject's personality.

Alumni Contributor: Orit Zuckerman

244. Sprout I/O: A Texturally Rich Interface Marcelo Coelho and Pattie Maes

Sprout I/O is a kinetic fur that can capture, mediate, and replay the physical
impressions we leave in our environment. It combines embedded electronic
actuators with a texturally rich substrate that is soft, fuzzy, and pliable to create a
dynamic structure where every fur strand can sense physical touch and be
individually moved. By developing a composite material that collocates kinetic I/O,
while preserving the expectations that we normally have from interacting with
physical things, we can more seamlessly embed and harness the power of
computation in our surrounding environments to create more meaningful interfaces
for our personal and social activities.

245. Surflex: A Shape-Changing Surface Marcelo Coelho and Pattie Maes

Surflex is a programmable surface for the design and visualization of physical
objects and spaces. It combines the different memory and elasticity states of its
materials to deform and gain new shapes, providing a novel alternative for 3-D
fabrication and the design of physically adaptable interfaces.

246. Swyp Natan Linder and Alexander List
NEW LISTING

With Swyp you can transfer any file from any app to any app on any device: simply
with a swipe of a finger. Swyp is a framework facilitating cross-app, cross-device
data exchange using physical "swipe" gestures. Our framework allows any number
of touch-sensing and collocated devices to establish file-exchange and
communications with no pairing other than a physical gesture. With this inherent
physical paradigm, users can immediately grasp the concepts behind
device-to-device communications. Our prototype application, Postcards, explores
touch-enabled mobile devices connected to the LuminAR augmented surface
interface. Postcards allows users to collaborate and create digital postcards using
Swyp interactions. We demonstrate how Swyp-enabled interfaces can support a new
generation of interactive workspaces by allowing pair-free gesture-based
communications to and from collocated devices.
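
A pairing-free gesture match of this kind can be pictured with the Python sketch below:
a swipe that exits one device is paired with a swipe that enters a nearby device within
a short time window. The time window, distance threshold, and event format are
illustrative assumptions rather than the actual Swyp framework.

    # Minimal sketch (Python) of Swyp-style gesture matching with no explicit pairing.

    TIME_WINDOW = 0.5      # seconds between swipe-out and swipe-in
    MAX_DISTANCE = 1.5     # meters between collocated devices

    def matches(out_event, in_event):
        dt = in_event["time"] - out_event["time"]
        dx = in_event["pos"][0] - out_event["pos"][0]
        dy = in_event["pos"][1] - out_event["pos"][1]
        close = (dx * dx + dy * dy) ** 0.5 <= MAX_DISTANCE
        return 0 <= dt <= TIME_WINDOW and close

    def route_file(out_event, in_events):
        """Send the file attached to a swipe-out to the best-matching swipe-in device."""
        candidates = [e for e in in_events if matches(out_event, e)]
        if not candidates:
            return None
        target = min(candidates, key=lambda e: e["time"])
        print(f"sending {out_event['file']} from {out_event['device']} to {target['device']}")
        return target["device"]

    out_event = {"device": "phone-A", "time": 10.00, "pos": (0.0, 0.0), "file": "postcard.png"}
    in_events = [
        {"device": "table-B", "time": 10.21, "pos": (0.4, 0.1)},
        {"device": "phone-C", "time": 12.80, "pos": (0.3, 0.0)},   # too late to match
    ]
    route_file(out_event, in_events)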

247. TaPuMa: Tangible Public Map Pranav Mistry and Tsuyoshi Kuroki
TaPuMa is a digital, tangible, public map that allows people to use everyday objects
they carry to access relevant, just-in-time information and to find locations of places
or people. TaPuMa envisions that conventional maps can be augmented with the
unique identities and affordances of the objects. TaPuMa uses an environment
where a map and dynamic content are projected on a tabletop. A camera mounted
above the table tracks the objects placed on the surface, and a software program
identifies and registers their locations. After identifying
the objects, the software provides relevant information visualizations directly on the
table. The projector augments both object and table with projected digital
information. TaPuMa explores a novel interaction mechanism where physical
objects are used as interfaces to digital information. It allows users to acquire
information through tangible media, the things they carry.

248. TeleStudio Seth Hunter
NEW LISTING

TeleKinect is peer-to-peer software for creative tele-video interactions. The
environment can be used to interact with others in the same digital window at a
distance, such as presenting a PowerPoint together, broadcasting your own news,
creating an animation, acting/dancing with any online video, overdub-commentary,
teaching, creating a puppet show, storytelling, social TV viewing, and exercising
together. The system tracks gestures and objects in the local environment and
maps them to virtual objects and characters. It allows users to creatively bridge the
physical and digital meeting spaces by defining their own mappings.

249. Textura Pattie Maes, Marcelo Coelho and Pol Pla i Conesa

Textura is an exploration of how to enhance white objects with textures. By
projecting onto any white surface, we can simulate different textures and materials.
We envision this technology to have great potential for customization and
personalization, and to be applicable to areas such as industrial design, the game
industry, and retail scenarios.

250. The Relative Size of Things Marcelo Coelho and Pattie Maes
NEW LISTING

The Relative Size of Things is a low-cost 3D scanner for the microscopic world. It
combines a webcam, a three-axis computer-controlled plotter, and image
processing to merge hundreds of photographs into a single three-dimensional scan
of surface features which are invisible to the naked eye.

251. thirdEye Pranav Mistry and Pattie Maes

thirdEye is a new technique that enables multiple viewers to see different things on
the same display screen at the same time. With thirdEye, a public sign board can
show a Japanese tourist instructions in Japanese and an American in English;
games won't need a split screen anymore—each player can see his or her personal
view of the game on the screen; two people watching TV can watch their favorite
channel on a single screen; a public display can show secret messages or patterns;
and in the same movie theater, people can see different ends of a suspense movie.

252. Transitive Materials: Towards an Integrated Approach to Material Technology Pattie Maes, Marcelo Coelho,
Neri Oxman, Sajid Sadi, Amit Zoran and Amir Mikhak

Transitive Materials is an umbrella project encompassing novel materials,
fabrication technologies, and traditional craft techniques that can operate in unison
to create objects and spaces that realize truly omnipresent interactivity. We are
developing interactive textiles, ubiquitous displays, and responsive spaces that
seamlessly couple input, output, processing, communication, and power
distribution, while preserving the uniqueness and emotional value of physical
materials and traditional craft. Life in a Comic, Physical Heart in a Virtual Body,
Augmented Pillows, Flexible Urban Display, Shutters, Sprout I/O, and Pulp-Based
Computing are current instantiations of these technologies.

253. VisionPlay Pattie Maes and Seth Hunter

VisionPlay is a framework to support the development of augmented play
experiences for children. We are interested in exploring mixed reality applications
enabled by web cameras, computer vision techniques, and animation that are more
socially oriented and physically engaging. These include using physical toys to
control digital characters, augmenting physical play environments with projection,
and merging representations of the physical world with virtual play spaces.

254. Watt Watcher Pattie Maes, Sajid Sadi and Eben Kunz

Energy is the backbone of our technological society, yet we have great difficulty
understanding where and how much of it is used. Watt Watcher provides in-place feedback on
aggregate energy use per device in a format that is easy to understand and to compare
intuitively. Energy is inherently invisible, and its use is often sporadic and difficult to
gauge. How much energy does your laptop use compared to your lamp? Or perhaps your toaster?
By giving users some intuition about these basic questions, this ReflectOn allows users both
to understand their use patterns and to form new, more informed habits.

Frank Moss—New Media Medicine


How radical new collaborations will catalyze a revolution in health.

255. CollaboRhythm Frank Moss, John Moore MD, Scott Gilroy, Joslin Diabetes Clinic, UMass
Medical School, Department of Veterans Affairs, Children's Hospital Boston,
Boston Medical Center

CollaboRhythm is a platform that enables patients to be at the center of every
interaction in their healthcare with the goal of empowering them to be involved,
reflective, and proactive. Care can be coordinated securely through cell phones,
tablets, televisions, and computers so that support can be provided in real-time in
the real world instead of through inconvenient doctor's office visits. We are currently
developing and demonstrating applications for diabetes and hypertension
management. A number of third parties have also developed exciting applications
using CollaboRhythm. Please visit http://newmed.media.mit.edu to learn about how
you can build a project with us using CollaboRhythm.

256. Collective Discovery Frank Moss and Ian Eslick

The choices we make about diet, environment, medications, or alternative therapies
constitute a massive collection of "everyday experiments." These data remain
largely unrecorded and are underutilized by traditional research institutions.
Collective Discovery aims to leverage the intuition and insight of patient
communities to generate datasets about everyday experiences. We emphasize
patient use of the experimental process by tracking and assessing the impact of
everyday experiments on their bodies and lives. Large-scale datasets of such
interventions yield powerful predictors that will lead to better individual
decision-making, stronger self-advocacy, identification of novel therapies, and
inspire better hypotheses in the traditional research context, accelerating the search
for new drugs and therapies.

257. ForgetAboutIT? John Moore MD and Frank Moss

ForgetAboutIT? has become an integrated part of CollaboRhythm. Currently, only 50% of
patients with chronic diseases take their medications. The problem is not
simple forgetfulness; it is a complex combination of lack of understanding, poor
self-reflection, limited social support, and almost non-existent communication
between provider and patient. ForgetAboutIT? is a system to support medication
adherence which presupposes that patients engaged in tight, collaborative
communication with their providers through interactive interfaces would think it
preposterous not to take their medications. Technically, it is an awareness system
that employs ubiquitous connectivity on the patient side through cell phones,
televisions, and other interactive devices and a multi-modal collaborative
workstation on the provider side.

258. I'm Listening John Moore MD, Henry Lieberman and Frank Moss

Increasing understanding of how to categorize patient symptoms for efficient
diagnosis has led to structured patient interviews and diagnostic flowcharts that can
provide diagnostic accuracy and save valuable physician time. But the rigidity of
predefined questions and controlled vocabulary for answers can leave patients
feeling over-constrained, as if the doctor (or computer system) is not really
attending to them. I’m Listening is a system for automatically conducting patient
pre-visit interviews. It does not replace a human doctor, but can be used before an
office visit to prepare the patient, deliver educational materials or triage care, and
preorder appropriate tests, making better use of both doctor and patient time. It
uses an on-screen avatar and natural language processing to (partially) understand
the patient's response. Key is a common-sense reasoning system that lets patients
express themselves in unconstrained natural language, even using metaphor, and
that maps the language to medically relevant categories.

259. Oovit PT Mar Gonzalez, John Moore, and Frank Moss

Patient adherence to physical therapy regimens is poor, and there is a lack of
quantitative data about patient performance, particularly at home. This project is an
end-to-end virtual rehabilitation system for supporting patient adherence to home
exercise that addresses the multi-factorial nature of the problem. The physical
therapist and patient make shared decisions about appropriate exercises and goals
and patients use a sensor-enabled gaming interface at home to perform exercises.
Quantitative data is then fed back to the therapist, who can properly adjust the
regimen and give reinforcing feedback and support.

Neri Oxman—Mediated Matter
How digital and fabrication technologies mediate between matter and environment to
radically transform the design and construction of objects, buildings, and systems.

260. 3D Printing of Functionally Graded Materials Neri Oxman and Steven Keating

Functionally graded materials–materials with spatially varying composition or
microstructure–are omnipresent in nature. From palm trees with radial density gradients, to
the spongy trabeculae structure of bone, to the hardness gradient found in many types of
beaks, graded materials offer material and structural efficiency. But in man-made structures
such as concrete pillars, materials are typically volumetrically homogenous. While using
homogenous materials allows for ease of production, improvements in strength, weight, and
material usage can be obtained by designing with functionally graded materials. To achieve
graded material objects, we are working to construct a 3D printer capable of dynamic mixing
of composition material. Starting with concrete and UV-curable polymers, we aim to create
structures, such as a bone-inspired beam, which have functionally graded materials. This
research was sponsored by the NSF EAGER award: Bio-Beams: FGM Digital Design & Fabrication.

261. Beast Neri Oxman

Beast is an organic-like entity created synthetically by the incorporation of physical
parameters into digital form-generation protocols. A single continuous surface,
acting both as structure and as skin, is locally modulated for both structural support
and corporeal aid. Beast combines structural, environmental, and corporeal
performance by adapting its thickness, pattern density, stiffness, flexibility, and
translucency to load, curvature, and skin-pressured areas respectively.

262. Building-Scale 3D Printing Neri Oxman and Steven Keating
NEW LISTING

How can additive fabrication technologies be scaled to building-sized construction? We
introduce a novel method of mobile swarm printing that allows small robotic agents to
construct large structures. The robotic agents extrude a fast-curing material that doubles as
both a concrete mold for structural walls and as a thermal insulation layer. This technique
offers many benefits over traditional construction methods, such as speed, custom geometry,
and cost. In addition, building utilities such as wiring and plumbing can be integrated
directly into the printing process. This research was sponsored by the NSF EAGER award:
Bio-Beams: FGM Digital Design & Fabrication.

263. Carpal Skin Neri Oxman

Carpal Skin is a prototype glove designed to protect against Carpal Tunnel
Syndrome, a medical condition in which the median nerve is compressed at the
wrist, leading to numbness, muscle atrophy, and weakness in the hand. Night-time
wrist splinting is the recommended treatment for most patients before going into
carpal tunnel release surgery. Carpal Skin is a process by which to map the
pain-profile of a particular patient—its intensity and duration—and to distribute hard
and soft materials to fit the patient’s anatomical and physiological requirements,
limiting movement in a customized fashion. The form-generation process is inspired by the way
animal coat patterns control stiffness variation.

264. CNSILK Pavilion Neri Oxman, Carlos Gonzalez, Markus Kayser and Jared Laucks
NEW LISTING

The CNSILK Pavilion extends current development of CNSILK research into large-scale
inhabitable spaces. Rigorous study and analysis of micro-scale fibrous
structures akin to silkworm cocoons and spiderwebs is underway in collaboration
with Tufts University and the Wyss Institute. Through this research the team will
develop a process of analysis and feedback while experimenting with multi-scalar
composite shell environments. Research and analysis at the micro-scale will aid in a
greater understanding of fibrous systems, traditionally used in tension, across
various scales to develop habitable space. The synthesis between biology, material
science, and computation, coupled with large-scale, multi-axis, robotic fabrication
opens new avenues for embedded performance-based design at a habitable scale.
This approach will allow us to create an environmentally tailored pavilion for an
event in the spring of 2013.

265. CNSILK: Computer Numerically Controlled Silk Cocoon Construction Neri Oxman

CNSILK explores the design and fabrication potential of silk fibers—inspired by silkworm
cocoons—for the construction of woven habitats. It explores a novel approach to the design
and fabrication of silk-based building skins by controlling the
mechanical and physical properties of spatial structures inherent in their
microstructures using multi-axes fabrication. The method offers construction without
assemblies such that material properties vary locally to accommodate for structural
and environmental requirements. This approach stands in contrast to functional
assemblies and kinetically actuated facades which require a great deal of energy to
operate, and are typically maintained by global control. Such material architectures
could simultaneously bear structural load, change their transparency so as to
control light levels within a spatial compartment (building or vehicle), and open and
close embedded pores so as to ventilate a space.

266. Digitally Reconfigurable Surface Neri Oxman, Benjamin Peters and Eric Marion
NEW LISTING

The digitally reconfigurable surface is a pin matrix apparatus for directly creating rigid 3D
surfaces from a computer-aided design (CAD) input. A digital design is uploaded into the
device, and a grid of thousands of tiny pins–much like the popular pin-art toy–are actuated
to form the desired surface. A rubber sheet is held by
vacuum pressure onto the tops of the pins to smooth out the surface formed by
them; this surface can then be used for industrial forming operations, simple resin
casting, and many other applications. The novel phase-changing electronic clutch
array allows the device to have independent position control over thousands of
discrete pins with only a single motorized 'push plate,' lowering the complexity and
manufacturing cost of this type of device.

267. FABRICOLOGY: Variable-Property 3D Printing as a Case for Sustainable Fabrication Neri
Oxman

Rapid prototyping technologies speed product design by facilitating visualization and testing
of prototypes. However, such machines are limited to using one material at a time; even
high-end 3D printers, which accommodate the deposition of multiple materials, must do so
discretely and not in mixtures. This project aims to build a
proof-of-concept of a 3D printer able to dynamically mix and vary the ratios of
different materials in order to produce a continuous gradient of material properties
with real-time correspondence to structural and environmental constraints.

Alumni Contributors: Mindy Eng, William J. Mitchell and Rachel Fong

268. FitSocket: A Better Way to Make Sockets Hugh Herr, Neri Oxman, Arthur Petron and Roy
Kornbluh (SRI)
Sockets–the cup-shaped devices that attach an amputated limb to a lower-limb
prosthesis–are made through unscientific, artisanal methods that do not have
repeatable quality and comfort from one amputee to the next. The FitSocket project
aims to identify the correlation between leg tissue properties and the design of a
comfortable socket. We accomplish this by creating a programmable socket called
the FitSocket which can iterate over hundreds of socket designs in minutes instead
of months.

269. Macro Atom Additive Manufacturing Neri Oxman and Benjamin Peters
NEW LISTING

Inspired by the success of the fusible alloy clutch utilized in the digitally reconfigurable
surface actuation system, we have been looking into the possibility of abstracting this
concept into three dimensions, using fusible alloy to attach spheres
or other particles together. In a simple case this involves plating micro-milli spheres
(metal, plastic, glass, etc.) in a solder wetting material (tin, silver, gold, copper, etc.)
and then plating that coating with a low temperature solder alloy so that it can be
reversibly “sintered” to adjacent particles. In a more complex case, particles would
have internal electronics that turn on or off (by heating) bond plates, resulting in a
more “atom-like” particle that could self-assemble or self-disassemble.

270. Mobile Office Neri Oxman and Benjamin Peters
NEW LISTING

A fast-moving workplace calls for... a fast-moving workstation! The mobile office is a
prototype robotic office fitted with a remote-controlled motorized base, onboard AC power
storage for 6-8 hours, and a 4-axis robotic arm. The mobile office is great for taking your
work down into the machine shop or to lengthy collaboration meetings.

271. Monocoque Neri Oxman

French for "single shell," Monocoque stands for a construction technique that
supports structural load using an object's external skin. Contrary to the traditional
design of building skins that distinguish between internal structural frameworks and
non-bearing skin elements, this approach promotes heterogeneity and
differentiation of material properties. The project demonstrates the notion of a
structural skin using a Voronoi pattern, the density of which corresponds to
multi-scalar loading conditions. The distribution of shear-stress lines and surface
pressure is embodied in the allocation and relative thickness of the vein-like
elements built into the skin. Its innovative 3D printing technology provides for the
ability to print parts and assemblies made of multiple materials within a single build,
as well as to create composite materials that present preset combinations of
mechanical properties.

272. Morphable Structures Neri Oxman and Steven Keating

Granular materials can be put into a jammed state through the application of
pressure to achieve a pseudo-solid material with controllable rigidity and geometry.
While jamming principles have been long known, large-scale applications of
jammed structures have not been significantly explored. The possibilities for
shape-changing machines and structures are vast and jamming provides a
plausible mechanism to achieve this. In this work, jamming prototypes are constructed to gain
a better understanding of the effect, and potential applications are highlighted and
demonstrated. Such applications range from a
morphable chair, to a floor which dynamically changes its softness in response to a
user falling down to reduce injury, to artistic free-form sculpting.

273. PCB Origami Neri Oxman and Yoav Sterman
NEW LISTING

The PCB Origami project is an innovative concept for printing digital materials and creating
3D objects with rigid-flex PCBs and pick-and-place machines. These machines allow printing of
digital electronic materials while controlling the location and properties of each printed
component. By combining this technology with rigid-flex PCBs and computational origami, it is
possible to create from a single sheet of PCB almost any 3D shape with the electronics
already embedded, producing a finished product that is both structural and functional.

274. Polyphemus Transport Neri Oxman and Benjamin Peters
NEW LISTING

This project was a weekend exploration of gyroscopic stabilization with application to
vehicle control and user interface. Using the well-known inverted pendulum drive system, a
unicycle scooter was made from low-cost components. It's like a Segway, but more
dangerous/fun!

275. Rapid Craft Neri Oxman

The values endorsed by vernacular architecture have traditionally promoted designs
constructed and informed by and for the environment while using local knowledge
and indigenous materials. Under the imperatives and growing recognition of
sustainable design, Rapid Craft seeks to integrate local construction techniques with
globally available digital design technologies in order to preserve, revive, and reshape
these cultural traditions.

276. Raycounting Neri Oxman

Raycounting is a method for generating customized light-shading constructions by
registering the intensity and orientation of light rays within a given environment. 3D
surfaces of double curvature are the result of assigning light parameters to flat
planes. The algorithm calculates the intensity, position and direction of one, or
multiple, light sources placed in a given environment and assigns local curvature
values to each point in space corresponding to the reference plane and the light
dimension. Light performance analysis tools are reconstructed programmatically to
allow for morphological synthesis based on intensity, frequency and polarization of
light parameters as defined by the user.

277. Responsive Glass Neri Oxman, Elizabeth Tsai, and Michal Firstenberg
NEW LISTING

Hydrogels are crosslinked polymers capable of absorbing great amounts of water. They have
been studied for the last 50 years, largely due to their hydrophilic character at ambient
temperatures, which makes them biocompatible and attractive for various biological
applications. In our project, however, we are
interested in their hydrophilic-hydrophobic phase-transition, occurring slightly above
room temperature. We investigate the mechanical and optical transformations at
this phase transition–namely, their swelling, permeability, and optical transmission
modification–as enabling ‘responsive’ or ‘passive’ dynamics for future product
design.

278. Robotic Light Expressions Neri Oxman and Steven Keating
NEW LISTING

We are exploring new modalities of creative photography through robotics and long-exposure
photography. Using a robotic arm, a light source is carried through precise movements in
front of a camera. Photographic compositions are recorded
as images of volumetric light. Robotic light “painting” can also be inverted: the
camera is moved via the arm to create an image “painted” with environmental light.
Finally, adding real-time sensor input to the moving arm and programming it to
explore the physical space around objects can reveal immaterial fields like radio
waves, magnetic fields, and heat flows.

279. Shape Memory Inkjet Neil Gershenfeld, Joseph M. Jacobson, Neri Oxman and Benjamin Peters
NEW LISTING

In most “drop-on-demand” inkjet control schemes, either a superheated bubble of liquid
propels a droplet or a piezoelectric crystal physically squeezes out a droplet at high speed.
These models rely on a reservoir of print media that is always ‘open’
on one end for the droplet outlet. This makes the design of the system difficult for
two reasons: the pore has to be small enough to hold back low-viscosity liquids by
surface tension alone (~10um diameter), and the open nozzle leaves the ink
exposed and prone to drying out. We propose a new deposition mechanism based
around a nozzle that is ‘plugged’ by an actuating ‘stopper’ made of shape memory
wire backed by a positive internal fluid pressure. When the wire is actuated, the
stopper is removed and the pressure of the fluid pushes one or more droplets out
until the stopper is replaced.

280. SpiderBot Neri Oxman and Benjamin Peters
NEW LISTING

The SpiderBot is a cable-suspended robotic gantry system that provides an easily deployable
platform from which to print large structures. The body is composed of a
deposition nozzle, a reservoir of material, and parallel winching electric motors.
Cables from the robot are connected to stable points high in the environment, such
as large trees or buildings. This actuation arrangement is capable of moving large
distances without the need for more conventional linear guides, much like a spider
does. The system is easy to set up for mobile projects, and will afford sufficient
printing resolution and build volume. Expanding foam can be deposited to create a
building-scale printed object rapidly. Another material type of interest is the
extrusion or spinning of tension elements, like rope or cable. With tension elements,
unique structures such as bridges or webs can be wrapped, woven, or strung
around environmental features or previously printed materials.

281. Superconductive Powder Purification Device Neri Oxman and Benjamin Peters
NEW LISTING

When synthesizing ceramic powders for use in high-temperature superconductors, the bulk
fraction of the synthesized powder that is actually superconductive is often low. In the
specific case of YBaCuO 1-2-3 synthesis, the oxygen content of the sintered material is
delicate (often destroyed by moisture) and critical to the
observation of superconductivity above 77K (N2 boiling point). An apparatus is
proposed that will preferentially filter out superconductive particles from
non-superconductive particles from a finely ground powder (~100 um). Filtered,
superconductive material will then be sintered together (or drawn into a
copper/brass carrying wire as is common with BSCCO) to yield a ceramic with
higher bulk fraction superconductivity. This apparatus would allow inexpensive
superconductors to be fabricated with loose tolerances/purities on starter chemicals
and firing apparatus.

Joseph Paradiso—Responsive Environments


How sensor networks augment and mediate human experience, interaction, and
perception.

282. A Machine Learning Toolbox for Musician Computer Interaction Joe Paradiso and Nick
Gillian
NEW LISTING

The SEC is an extension to the free open-source program EyesWeb that contains a large number
of machine learning and signal processing algorithms that have been specifically designed for
real-time pattern and gesture recognition. All the algorithms within the SEC are encapsulated
as individual blocks, allowing the user to connect
the output of one block to the input of another to create a signal flow chain. This
allows a user to quickly build and train their own custom gesture recognition system,
without having to write a single line of code or explicitly understand how any of the
machine learning algorithms within their recognition system work.

283. Beyond the Light Switch: New Frontiers in Dynamic Lighting Matthew Aldrich
NEW LISTING

Advances in building technology and sensor networks offer a chance to imagine new forms of
personalized and efficient utility control. One such area is lighting control. With the aid
of sensor networks, these new control systems not only offer lower energy consumption, but
also enable new ways to specify and augment lighting. It is our belief that dynamic lighting
controlled by a single user, or even an entire office floor, is the frontier of future
intelligent and adaptive systems.

284. Chameleon Guitar: Physical Heart in a Virtual Body Joe Paradiso and Amit Zoran

How can traditional values be embedded into a digital object? We explore this concept by
implementing a special guitar that combines physical acoustic properties
with virtual capabilities. The acoustical values will be embodied by a wooden
heart—a unique, replaceable piece of wood that will give the guitar a unique sound.
The acoustic signal created by this wooden heart will be digitally processed in order
to create flexible sound design.

285. Customizable Sensate Surface for Music Control Joe Paradiso, Nan-Wei Gong and Nan Zhao
NEW LISTING

We developed a music control surface that can be integrated with any musical instrument via a
versatile, customizable, and inexpensive user interface. This sensate surface allows
capacitive sensor electrodes and connections between electronic components to be printed onto
a large roll of flexible substrate of unrestricted length. The high-dynamic-range capacitive
sensing electrodes can infer not only touch, but also near-range, non-contact gestural nuance
in a music
performance. With this sensate surface, users can “cut” out their desired shapes,
“paste” the number of inputs, and customize their controller interfaces, which can
then send signals wirelessly to effects or software synthesizers. We seek to find a
solution for integrating the form factor of traditional music controllers seamlessly on
top of one’s instrument while adding expressiveness to performance by sensing and
incorporating movements and gestures to manipulate the musical output.

286. Data-Driven Elevator Music Joe Paradiso, Gershon Dublon, Nicholas Joliat, Brian Mayton
and Ben Houge (MIT Artist in Residence)
NEW LISTING

Our new building lets us see across spaces, extending our visual perception beyond the walls
that enclose us. Yet, invisibly, networks of sensors, from HVAC and
lighting systems to Twitter and RFID, control our environment and capture our
social dynamics. This project proposes extending our senses into this world of
information, imagining the building as glass in every sense. Sensor devices
distributed throughout the Lab transmit privacy-protected audio streams
and real-time measurements of motion, temperature, humidity, and light levels. The
data are composed into an eight-channel audio installation in the glass elevator that
turns these dynamic parameters into music, while microphone streams are
spatialized to simulate their real locations in the building. A pressure sensor in the
elevator provides us with fine-grained altitude to control the spatialization and
sonification. As visitors move from floor to floor, they hear the activities taking place
on each.
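The fine-grained altitude mentioned above comes from barometric pressure. As a rough
illustration only (the project text does not describe its conversion code, so the function
below and the assumed floor height are hypothetical), the standard barometric formula maps
pressure to height, which can then be mapped to a floor index:

    # Illustrative sketch: barometric pressure -> approximate altitude -> floor index.
    # Reference pressure and floor height are assumptions, not project parameters.
    def pressure_to_altitude_m(pressure_pa, sea_level_pa=101325.0):
        """Approximate altitude in meters from absolute pressure in pascals."""
        return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

    def altitude_to_floor(altitude_m, ground_altitude_m, floor_height_m=4.0):
        """Map altitude to a (possibly fractional) floor index."""
        return (altitude_m - ground_altitude_m) / floor_height_m

    ground = pressure_to_altitude_m(101200.0)   # pressure reading taken in the lobby
    here = pressure_to_altitude_m(101060.0)     # pressure reading taken in the elevator
    print(round(altitude_to_floor(here, ground), 1))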

287. Dense, Low-Power Environmental Monitoring for Smart Energy Profiling Nan-Wei Gong,
Ashley Turza, David Way and Joe Paradiso with Phil London, Gary Ware, Brett Leida and Tim Ren
(Schneider Electric); Leon Glicksman and Steve Ray (MIT Building Technologies)

We are working with sponsor Schneider Electric to deploy a dense, low-power
wireless sensor network aimed at environmental monitoring for smart energy
profiling. This distributed sensor system measures temperature, humidity, and 3D
airflow, and transmits this information through a wireless Zigbee protocol. These
sensing units are currently deployed in the lower atrium of E14. The data is being
used to inform CFD models of airflow in buildings and to extract valuable information about
the efficiency of commercial building HVAC systems, the energy efficiency of different
building materials, and lighting choices in novel architectural designs.

288. Digito: A Fine-Grained, Gesturally Controlled Virtual Musical Instrument Joe Paradiso
and Nick Gillian
NEW LISTING

Digito is a virtual musical instrument controlled through a number of intricate hand gestures
which provide both discrete and continuous control of its sound engine. The hand gestures are
captured using a 3D depth sensor and recognized using computer vision and machine learning
algorithms. Digito is currently being used to evaluate the possible strengths and limitations
of gesturally controlled virtual musical instruments and to assist in uncovering new
questions regarding the design of gestural musical interfaces.

289. DoppelLab: Spatialized Sonification in a 3D Virtual Environment Joe Paradiso, Nicholas
Joliat, Brian Mayton, Gershon Dublon, and Ben Houge (MIT Artist in Residence)
NEW LISTING

In DoppelLab, we are developing tools that intuitively and scalably represent the rich,
multimodal sensor data produced by a building and its inhabitants. Our aims transcend the
traditional graphical display, in terms of the richness of data conveyed and the
immersiveness of the user experience. To this end, we have incorporated
3D spatialized data sonification into the DoppelLab application, as well as in
standalone installations. Currently, we virtually spatialize streams of audio recorded
by nodes throughout the physical space. By reversing and shuffling short audio
segments, we distill the sound to its ambient essence while protecting occupant
privacy. In addition to the sampled audio, our work includes abstract data
sonification that conveys multimodal sensor data. As part of this work, we are
collaborating with the internationally active composer and MIT artist-in-residence
Ben Houge, towards new avenues for cross-reality data sonification and aleatoric
musical composition.
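As a rough sketch of the privacy step described above (the real system's segment length and
implementation are not given here, so the parameters below are assumptions), reversing and
shuffling fixed-length segments can be done in a few lines of NumPy:

    import numpy as np

    def obfuscate_audio(samples, sample_rate, segment_seconds=0.2, seed=None):
        """Reverse short segments and shuffle their order, keeping ambience but
        making speech unintelligible."""
        rng = np.random.default_rng(seed)
        seg_len = max(1, int(segment_seconds * sample_rate))
        n_segs = len(samples) // seg_len
        if n_segs == 0:
            return np.asarray(samples)
        order = rng.permutation(n_segs)                    # shuffled segment order
        segments = [np.asarray(samples[i * seg_len:(i + 1) * seg_len])[::-1]
                    for i in order]                        # each segment reversed
        return np.concatenate(segments)

    audio = np.random.randn(16000)                         # one second of noise at 16 kHz
    print(obfuscate_audio(audio, 16000, seed=0).shape)     # (16000,)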

290. DoppelLab: Tools for Exploring and Harnessing Multimodal Sensor Network Data Joe
Paradiso, Gershon Dublon, Laurel Smith Pardue, Brian Mayton, Nicholas Joliat, and Noah Swartz

Homes and offices are being filled with sensor networks to answer specific queries and solve
pre-determined problems, but no comprehensive visualization tools exist for fusing these
disparate data to examine relationships across spaces and sensing
modalities. DoppelLab is an immersive, cross-reality virtual environment that serves
as an active repository of the multimodal sensor data produced by a building and its
inhabitants. We transform architectural models into browsing environments for
real-time sensor data visualization and sonification, as well as open-ended
platforms for building audiovisual applications atop those data. These applications
in turn become sensor-driven interfaces to physical world actuation and control.
DoppelLab encompasses a set of tools for parsing, visualization, sonification, and
application development, and by organizing data by the space from which they
originate, DoppelLab provides a platform to make both broad and specific queries
about the activities, systems, and relationships in a complex, sensor-rich
environment.

291. Expressive Re-Performance Joe Paradiso, Nick Gillian and Laurel Smith Pardue

Expressive musical re-performance is about enabling a person to experience the creative
aspects of playing a favorite song regardless of technical expertise. This is done by
providing users with computer-linked electronic instruments that distill the instrument's
interface but still allow them to provide expressive gesture. The
next note in an audio source is triggered on the instrument, with the computer
providing correctly pitched audio and mapping the expressive content onto it. Thus,
the physicality of the instrument remains, but requires far less technique. We are
implementing an expressive re-performance system using commercially available,
expressive electronic musical instruments and an actual recording as the basis for
deriving audio. Performers will be able to select a voice within the recording and
re-perform the song with the targeted line subject to their own creative and
expressive impulse.

292. Feedback Controlled Solid State Lighting Joe Paradiso, Matthew Henry Aldrich and Nan
Zhao
At present, luminous efficacy and cost remain the greatest barriers to broad
adoption of LED lighting. However, it is anticipated that within several years, these
challenges will be overcome. While we may think our basic lighting needs have
been met, this technology offers many more opportunities than just energy
efficiency: this research attempts to alter our expectations for lighting and cast aside
our assumptions about control and performance. We will introduce new, low-cost
sensing modalities that are attuned to human factors such as user context,
circadian rhythms, or productivity, and integrate these data with atypical
environmental factors to move beyond traditional lux measurements. To research
and study these themes, we are focusing on the development of superior
color-rendering systems, new power topologies for LED control, and low-cost
multimodal sensor networks to monitor the lighting network as well as the
environment.

293. FreeD Joe Paradiso and Amit Zoran

The FreeD is a hand-held, digitally controlled, milling device that is guided and
monitored by a computer while still preserving the craftsperson's freedom to sculpt
and carve. The computer will intervene only when the milling bit approaches the
planned model. Its interaction is either by slowing down the spindle speed or by
drawing back the shaft; the rest of the time it allows complete freedom, letting the
user to manipulate and shape the work in any creative way.

294. Funk2: Causal Reflective Programming Joe Paradiso and Bo Morgan

Funk2 is a novel process-description language that keeps track of everything that it does.
Remembering these causal execution traces allows parallel threads to reflect,
recognize, and react to the history and status of other threads. Novel forms of
complex, adaptive, nonlinear control algorithms can be written in the Funk2
programming language. Currently, Funk2 is implemented to take advantage of
distributed grid processors consisting of a heterogeneous network of computers, so
that hundreds of thousands of parallel threads can be run concurrently, each using
many gigabytes of memory. Funk2 is inspired by Marvin Minsky's Critic-Selector
theory of human cognitive reflection.

295. Gesture Recognition Toolkit Joe Paradiso and Nick Gillian
NEW LISTING

The Gesture Recognition Toolkit (GRT) is a cross-platform, open-source C++ machine-learning
library that has been specifically designed for real-time gesture recognition. The GRT has
been created as a general-purpose tool for allowing
programmers with little or no machine-learning experience to develop their own
machine-learning based recognition systems, through just a few lines of code.
Further, the GRT is designed to enable machine-learning experts to precisely
customize their own recognition systems, and easily incorporate their own
algorithms within the GRT framework. In addition to helping developers quickly create their
own gesture-recognition systems, the machine-learning algorithms at
the core of the GRT have been designed to be rapidly trained with a limited number
of training examples for each gesture. The GRT therefore allows a more diverse
group of users to easily integrate gesture recognition into their own projects.
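For readers unfamiliar with the pipeline pattern the GRT wraps, the following deliberately
generic Python sketch (it is not the GRT's C++ API; the class and method names are invented
for illustration) shows the train-then-predict flow from a handful of labeled feature vectors
per gesture:

    import numpy as np

    # A toy nearest-centroid recognizer illustrating training from few examples per gesture.
    class TinyGestureRecognizer:
        def __init__(self):
            self.centroids = {}                     # gesture label -> mean feature vector

        def train(self, samples, labels):
            for label in set(labels):
                X = np.array([s for s, l in zip(samples, labels) if l == label])
                self.centroids[label] = X.mean(axis=0)

        def predict(self, sample):
            sample = np.asarray(sample)
            return min(self.centroids,
                       key=lambda label: np.linalg.norm(sample - self.centroids[label]))

    # Three training examples per gesture, e.g., simple accelerometer features
    recognizer = TinyGestureRecognizer()
    recognizer.train(
        samples=[[0.9, 0.1], [1.1, 0.0], [1.0, 0.2], [0.0, 1.0], [0.1, 0.9], [-0.1, 1.1]],
        labels=["swipe", "swipe", "swipe", "circle", "circle", "circle"])
    print(recognizer.predict([0.95, 0.05]))         # -> "swipe"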

296. Grassroots Mobile Infrastructure Joe Paradiso, Ethan Zuckerman, Pragun Goyal and Nathan
Matias
NEW LISTING

We want to help people in nations where electric power is scarce sell power to their
neighbors. We're designing a piece of prototype hardware that plugs into a diesel generator
or other power source, distributes the power to multiple outlets, monitors
how much power is used, and uses mobile payments to charge the customer for the
power consumed.
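To make the metering-and-billing loop concrete, here is a minimal sketch under assumed
parameters (the actual tariff, sampling scheme, and payment flow are not specified in the
project description):

    # Illustrative only: rates, fees, and sampling interval below are assumptions.
    def meter_energy_wh(power_samples_w, sample_interval_s):
        """Integrate sampled power (watts) over time into energy (watt-hours)."""
        return sum(power_samples_w) * sample_interval_s / 3600.0

    def outlet_charge(energy_wh, tariff_per_kwh=0.50, fixed_fee=0.10):
        """Amount to bill, in local currency, for one outlet session."""
        return fixed_fee + (energy_wh / 1000.0) * tariff_per_kwh

    # A phone charger drawing about 5 W, sampled once per minute for two hours
    samples = [5.0] * 120
    energy = meter_energy_wh(samples, sample_interval_s=60)    # 10 Wh
    print(round(outlet_charge(energy), 3))                     # 0.105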

297. Hackable, High-Bandwidth Sensory Augmentation Joe Paradiso and Gershon Dublon
NEW LISTING

The tongue has extremely dense sensing resolution, as well as an extraordinary degree of
neuroplasticity (the ability to adapt to and internalize new input). Research has shown that
electro-tactile tongue displays paired with cameras can be used as vision prosthetics for the
blind or visually impaired; users quickly learn to read and navigate through natural
environments, and many describe the signals as
an innate sense. However, existing displays are expensive and difficult to adapt.
Tongueduino is an inexpensive, vinyl-cut tongue display designed to interface with
many types of sensors besides cameras. Connected to a magnetometer, for
example, the system provides a user with an internal sense of direction, like a
migratory bird. Piezo whiskers allow a user to sense orientation, wind, and the
lightest touch. Through tongueduino, we hope to bring electro-tactile sensory
substitution beyond the discourse of vision replacement, toward open-ended
sensory augmentation.

298. Patchwerk: Multi-User Network Control of a Massive Modular Synth Joe Paradiso, Gershon
Dublon, Nicholas Joliat and Brian Mayton
NEW LISTING

Patchwerk is a networked synthesizer module with tightly coupled web browser and tangible
interfaces. Patchwerk connects to a pre-existing modular synthesizer using the emerging
cross-platform HTML5 WebSocket standard to enable low-latency, high-bandwidth, concurrent
control of analog signals by multiple users. Online users control physical outputs on a
custom-designed cabinet that reflects their activity
through a combination of motorized knobs and LEDs, and streams the resultant
audio. In a typical installation, a composer creates a complex physical patch on the
modular synth that exposes a set of analog and digital parameters (knobs, buttons,
toggles, and triggers) to the web-enabled cabinet. Both physically present and
online audiences can control those parameters, simultaneously seeing and hearing
the results of each other's actions. By enabling collaborative interaction with a
massive analog synthesizer, Patchwerk brings a broad audience closer to a rare
and historically important instrument.
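The concurrency pattern is easy to picture: every parameter change is broadcast so all
connected users see and hear each other's actions. The in-process Python sketch below
simulates that pattern with invented names; the real Patchwerk uses HTML5 WebSockets and
custom cabinet hardware, neither of which is modeled here:

    import asyncio

    class SharedPatch:
        def __init__(self):
            self.knobs = {"cutoff": 0.5, "resonance": 0.2}   # exposed analog parameters
            self.clients = set()                             # one message queue per client

        def connect(self):
            queue = asyncio.Queue()
            self.clients.add(queue)
            return queue

        async def set_knob(self, user, name, value):
            self.knobs[name] = value
            for queue in self.clients:                       # broadcast to every client
                await queue.put((user, name, value))

    async def listener(name, queue, updates=2):
        for _ in range(updates):
            user, knob, value = await queue.get()
            print(f"{name} sees {user} set {knob} to {value}")

    async def main():
        patch = SharedPatch()
        q1, q2 = patch.connect(), patch.connect()
        await asyncio.gather(
            listener("online user", q1),
            listener("gallery visitor", q2),
            patch.set_knob("alice", "cutoff", 0.8),
            patch.set_knob("bob", "resonance", 0.6),
        )

    asyncio.run(main())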

299. Personal Video Layers for Privacy Joe Paradiso and Gershon Dublon

We are developing an opt-in camera network, in which users carrying wearable tags
are visible to the network and everyone else is invisible. Existing systems for
configurable dynamic privacy in this context are opt-out and catch-all; users desiring
privacy carry pre-registered tags that disable sensing and networked media
services for everyone in the room. To address these issues, we separate video into
layers of flexible sprites representing each person in the field of view, and transmit
video of only those who opt-in. Our system can also define groups of users who can
be dialed in and out of the video stream dynamically. For cross-reality applications,
these dynamic layers achieve a new level of video granularity, allowing users and
groups to uncover correspondences between their activities across spaces.

300. Rapidnition: Rapid User-Customizable Gesture Recognition Joe Paradiso and Nick Gillian
NEW LISTING

Rapidnition is a new way of thinking about gesturally controlled interfaces. Rather than
forcing users to adapt their behavior to a predefined gestural interface, Rapidnition frees
users to define their own gestures, which the system rapidly learns. The machine learning
algorithms at the core of Rapidnition enable it to
quickly infer a user’s gestural vocabulary, using a small number of
user-demonstrated examples of each gesture. Rapidnition is capable of recognizing
not just static postures but also dynamic temporal gestures. In addition, Rapidnition
allows the user to define complex, nonlinear, continuous-mapping spaces.
Rapidnition is currently being applied to the real-time recognition of musical
gestures to rigorously test both the discrete and continuous recognition abilities of
the system.

301. Scalable and Versatile Surface for Ubiquitous Sensing Joe Paradiso, Nan-Wei Gong and
Steve Hodges (Microsoft Research Cambridge)
NEW LISTING

We demonstrate the design and implementation of a new versatile, scalable, and cost-effective
sensate surface. The system is based on a new conductive inkjet technology, which allows
capacitive sensor electrodes and different types of RF
antennas to be cheaply printed onto a roll of flexible substrate that may be many
meters long. By deploying this surface on (or under) a floor it is possible to detect
the presence and whereabouts of users through both passive and active capacitive
coupling schemes. We have also incorporated GSM and NFC electromagnetic
radiation sensing and piezoelectric pressure and vibration detection. We believe
that this technology has the potential to change the way we think about covering
large areas with sensors and associated electronic circuitry–not just floors, but
potentially desktops, walls, and beyond.

302. TRUSS: Tracking Risk with Ubiquitous Smart Sensing Joe Paradiso, Gershon Dublon and
Brian Dean Mayton

We are developing a system for inferring safety context on construction sites by fusing data
from wearable devices, distributed sensing infrastructure, and video.
Wearable sensors stream real-time levels of dangerous gases, dust, noise, light
quality, precise altitude, and motion to base stations that synchronize the mobile
devices, monitor the environment, and capture video. Context mined from these
data is used to highlight salient elements in the video stream for monitoring and
decision support in a control room. We tested our system in an initial user study on a
construction site, instrumenting a small number of steel workers and collecting data.
A recently completed hardware revision will be followed by further user testing and
interface development.

303. Virtual Messenger Joe Paradiso and Nick Gillian
NEW LISTING

The virtual messenger system acts as a portal to subtly communicate messages and pass
information between the digital, virtual, and physical worlds, using the
Media Lab’s Glass Infrastructure system. Users who opt into the system will be
tracked throughout the Media Lab by a multimodal sensor network. If a participating
user approaches any of the Lab’s Glass Infrastructure displays they will be met by
their virtual personal assistant (VPA), who exists in DoppelLab’s virtual
representation of the current physical space. Each VPA will act as the mediator who
will pass on any messages or important information from the digital world to the
user in the physical world. Participating users can interact with their VPA through a
small subset of hand gestures, allowing the user to read any pending messages or
notices, or inform their virtual avatar not to bother them until later.

304. Wearable, Wireless Sensor System for Sports Medicine and Interactive Media Joe Paradiso,
Michael Thomas Lapinski, Dr. Eric Berkson and MGH Sports Medicine

This project is a system of compact, wearable, wireless sensor nodes, equipped with full
six-degree-of-freedom inertial measurement units and node-to-node
capacitive proximity sensing. A high-bandwidth, channel-shared RF protocol has
been developed to acquire data from many (e.g., 25) of these sensors at 100 Hz
full-state update rates, and software is being developed to fuse this data into a
compact set of descriptive parameters in real time. A base station and central
computer clock the network and process received data. We aim to capture and
analyze the physical movements of multiple people in real time, using unobtrusive
sensors worn on the body. Applications abound in biomotion analysis, sports
medicine, health monitoring, interactive exercise, immersive gaming, and interactive
dance ensemble performance.

Alumni Contributors: Ryan Aylward and Mathew Laibowitz

305. WristQue: A Personal Wristband for Sensing and Smart Infrastructure Joe Paradiso and
Brian Mayton
NEW LISTING

While many wearable sensors have been developed, few are actually worn by people on a regular
basis. WristQue is a wristband sensor that is comfortable and customizable to encourage
widespread adoption. The hardware is 3D printable, giving users a choice of materials and
colors. Internally, the wristband will include a main board with microprocessor, standard
sensors, and localization/wireless
communication, and an additional expansion board that can be replaced to
customize functionality of the device for a wide variety of applications.
Environmental sensors (temperature, humidity, light) combined with fine-grained
indoor localization will enable smarter building infrastructure, allowing HVAC and
lighting systems to optimize to the locations and ways that people are actually using
the space. Users' preferences can be input through buttons on the wristband.
Fine-grained localization also opens up possibilities for larger applications, such as
visualizing building usage through DoppelLab and smart displays that react to
users' presence.

Alex 'Sandy' Pentland—Human Dynamics


How social networks can influence our lives in business, health, and governance, as
well as technology adoption and diffusion.

306. Economic Decision-Making in the Wild Alex (Sandy) Pentland, Yaniv Altshuler, Katherine
Krumme and Wei Pan

Using credit card transaction data and trading data, we look at patterns of human behavior
change over time and space, and how these change with social influence and with macroeconomic
features. To what extent do network features help to predict economic ones?

307. Funf: Open Sensing Framework Alex (Sandy) Pentland, Nadav Aharony, Wei Pan, Cody Sumter
and Alan Gardner
NEW LISTING

The Funf open sensing framework is an Android-based extensible framework for phone-based
mobile sensing. The core concept is to provide a reusable set of
functionalities enabling collection, uploading, and configuration for a wide range of
data types. Funf Journal is an Android application for researchers, self-trackers, and
anyone interested in collecting and exploring information related to the mobile
device, its environment, and its user's behavior. It is built using the Funf framework
and makes use of many of its built-in features.

308. openPDS: A Privacy-Preserving Personal Data Store Henrick Sandell, Jeff Schmitz, Alex
(Sandy) Pentland, Yves-Alexandre de Montjoye and Brian Sweatt
With their built-in sensors, smart phones are at the forefront of personal data
collection. However, personal data currently tends to be monopolized and siloed,
preventing companies from building innovative data-driven services. While there is
substantial work on privacy and fair use of personal data, a pragmatic technical
solution has yet to be realized. openPDS is a privacy-preserving implementation of
an information repository which allows the user to collect, store, and give access to
his data. Via an innovative framework for third-party applications to be installed, the
system ensures that the sensitive data processing takes place within the user's
PDS, as opposed to a third-party server. The framework allows for PDSs to engage
in privacy-preserving group computation, which is used as a replacement for
centralized aggregation.
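One way to picture the group computation described above is the sketch below, which uses
invented class and query names rather than openPDS's actual interface: each personal data
store answers a query locally, and only the derived answer is aggregated centrally.

    # Conceptual sketch only: raw records never leave the user's store; the server
    # sees nothing but per-user answers that are safe to aggregate.
    class PersonalDataStore:
        def __init__(self, location_records):
            self.location_records = location_records        # raw data stays here

        def answer(self, query):
            if query == "days_active_last_week":
                return len({r["day"] for r in self.location_records})
            raise ValueError("query not allowed by this PDS")

    def group_average(stores, query):
        """Server-side aggregation over per-user answers, never over raw records."""
        answers = [store.answer(query) for store in stores]
        return sum(answers) / len(answers)

    alice = PersonalDataStore([{"day": 1}, {"day": 1}, {"day": 3}])
    bob = PersonalDataStore([{"day": d} for d in range(5)])
    print(group_average([alice, bob], "days_active_last_week"))   # 3.5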

309. Sensible Alex (Sandy) Pentland, Benjamin Waber and Daniel Olguin Olguin
Organizations
Data mining of email has provided important insights into how organizations
function and what management practices lead to greater productivity. But important
communications are almost always face-to-face, so we are missing the greater part
of the picture. Today, however, people carry cell phones and wear RFID badges.
These body-worn sensor networks mean that we can potentially know who talks to
whom, and even how they talk to each other. Sensible Organizations investigates
how these new technologies for sensing human interaction can be used to reinvent
organizations and management.

310. Social Signals in Biomedicine Max Little

We are using non-invasive measurement of social signals found in voice, body
movement, and location to quantify symptoms in neurological disorders such as
Parkinson's Disease.

Rosalind W. Picard—Affective Computing


How new technologies can help people better communicate, understand, and respond
to affective information.

311. Analysis of Autonomic Sleep Patterns Akane Sano, Rosalind W. Picard, Suzanne E. Goldman,
Beth A. Malow (Vanderbilt), Rana el Kaliouby, and Robert Stickgold (Harvard)
We are examining autonomic sleep patterns using a wrist-worn biosensor that
enables comfortable measurement of skin conductance, skin temperature, and
motion. The skin conductance reflects sympathetic arousal. We are looking at sleep
patterns in healthy groups, in groups with autism, and in groups with sleep
disorders. We are looking especially at sleep quality and at performance on learning
and memory tasks.

312. Auditory Desensitization Games Rosalind W. Picard, Matthew Goodwin and Rob Morris

Persons on the autism spectrum often report hypersensitivity to sound. Efforts have been made
to manage this condition, but there is wide room for improvement. One
approach—exposure therapy—has promise, and a recent study showed that it
helped several individuals diagnosed with autism overcome their sound sensitivities.
In this project, we borrow principles from exposure therapy, and use fun, engaging,
games to help individuals gradually get used to sounds that they might ordinarily
find frightening or painful.

313. Automatic Stress Recognition in Real-Life Settings Rosalind W. Picard, Robert Randall
Morris and Javier Hernandez Rivera

Technologies that automatically recognize stress are extremely important for preventing
chronic psychological stress and the pathophysiological risks associated with it. The
introduction of comfortable, wearable biosensors has created new opportunities to measure
stress in real-life environments, but there is often great
variability in how people experience stress and how they express it physiologically.
In this project, we modify the loss function of Support Vector Machines to encode a
person's tendency to feel more or less stressed, and give more importance to the
training samples of the most similar subjects. These changes are validated in a
case study where skin conductance was monitored in nine call center employees
during one week of their regular work. Employees in this type of setting usually handle high
volumes of calls every day and frequently interact with angry and frustrated customers,
leading to high stress levels.
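One plausible reading of that modification (an assumption for illustration, not the authors'
exact formulation) is a similarity-weighted soft-margin SVM, in which each training sample
inherits a weight from how similar its subject is to the target person:

    \min_{w,\,b,\,\xi}\ \ \tfrac{1}{2}\lVert w\rVert^{2} \;+\; C\sum_{i=1}^{n} s_{j(i)}\,\xi_{i}
    \quad\text{subject to}\quad y_{i}\,(w^{\top}x_{i}+b)\ \ge\ 1-\xi_{i},\qquad \xi_{i}\ \ge\ 0,

where s_{j(i)} \in [0,1] grows with the physiological similarity between subject j(i), who
contributed sample i, and the person whose stress is being predicted, so that dissimilar
subjects contribute less to the margin penalty.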

314. Cardiocam Ming-Zher Poh, Daniel McDuff and Rosalind W. Picard

Cardiocam is a low-cost, non-contact technology for measurement of physiological
signals such as heart rate and breathing rate using a basic digital imaging device
such as a webcam. The ability to perform remote measurements of vital signs is
promising for enhancing the delivery of primary health care.
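The published Cardiocam work recovers the pulse from subtle color changes in video of the
face. The sketch below is a simplified, assumption-laden variant (mean green-channel
intensity plus an FFT peak, rather than the blind source separation used in the actual
system), meant only to convey the signal path:

    import numpy as np

    def estimate_heart_rate(green_means, fps):
        """Estimate heart rate (bpm) from a 1-D trace of mean green-channel intensity
        sampled from a face region at `fps` frames per second."""
        x = np.asarray(green_means, dtype=float)
        x = x - x.mean()                          # remove the DC component
        spectrum = np.abs(np.fft.rfft(x))         # magnitude spectrum
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        band = (freqs >= 0.75) & (freqs <= 4.0)   # 45-240 bpm physiological band
        peak = freqs[band][np.argmax(spectrum[band])]
        return peak * 60.0                        # convert Hz to beats per minute

    # Example: a synthetic 72-bpm trace sampled at 30 fps for 10 seconds
    t = np.arange(0, 10, 1 / 30)
    fake_trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
    print(round(estimate_heart_rate(fake_trace, fps=30)))   # ~72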

315. CrowdCounsel Rosalind W. Picard and Robert Morris

Efforts to build emotionally responsive forms of artificial intelligence have been
hampered by many difficulties, not least of which include the challenges of natural
language processing. Although there have been many gains in this domain, it is still
difficult to build technologies that offer nuanced forms of emotional support. To
address these challenges, researchers might look towards human computation – an
approach that harnesses the power of large, distributed online communities to solve
artificial intelligence problems that might otherwise be intractable. We present a new
technological approach that uses human computation algorithms, in conjunction
with on-demand online workforces, to provide expedient emotional support.

316. Customized Computer-Mediated Interventions Rosalind W. Picard and Rob Morris

Individuals diagnosed with autism spectrum disorder (ASD) often have intense, focused
interests. These interests, when harnessed properly, can help motivate an
individual to persist in a task that might otherwise be too challenging or bothersome.
For example, past research has shown that embedding focused interests into
educational curricula can increase task adherence and task performance in
individuals with ASD. However, providing this degree of customization is often
time-consuming and costly and, in the case of computer-mediated interventions,
high-level computer-programming skills are often required. We have recently
designed new software to solve this problem. Specifically, we have built an
algorithm that will: (1) retrieve user-specified images from the Google database; (2)
strip them of their background; and (3) embed them seamlessly into Flash-based
computer programs.

317. Emotion and Memory Daniel McDuff, Rana el Kaliouby and Rosalind Picard
NEW LISTING

Have you ever wondered what makes an ad memorable? We have performed a comprehensive review
of literature concerning advertising, memory, and emotion. A summary of the results is
available.

318. Evaluation Tool for Recognition of Social-Emotional Expressions from Facial-Head
Movements Rosalind W. Picard

To help people improve their reading of faces during natural conversations, we developed a
video tool to evaluate this skill. We collected over 100 videos of conversations between
pairs of both autistic and neurotypical people, each wearing a Self-Cam. The videos were
manually segmented into chunks of 7-20 seconds according to expressive content, labeled, and
sorted by difficulty—all tasks we plan
to automate using technologies under development. Next, we built a rating interface
including videos of self, peers, familiar adults, strangers, and unknown actors,
allowing for performance comparisons across conditions of familiarity and
expression. We obtained reliable identification (by coders) of categories of smiling,
happy, interested, thinking, and unsure in the segmented videos. The tool was
finally used to assess recognition of these five categories for eight neurotypical and
five autistic people. Results show some autistics approaching the abilities of
neurotypicals while several score just above random.

Alumni Contributor: Alea Teeters

319. Exploring Temporal Patterns of Smile Rosalind W. Picard and Mohammed Ehasanul Hoque
NEW LISTING

A smile is a multi-purpose expression. We smile to express rapport, polite disagreement,
delight, sarcasm, and often, even frustration. Is it possible to develop computational models
to distinguish among smiling instances when delighted,
frustrated or just being polite? In our ongoing work, we demonstrate that it is useful
to explore how the patterns of smile evolve through time, and that while a smile may
occur in positive and in negative situations, its dynamics may help to disambiguate
the underlying state.

320. Externalization Toolkit Rosalind W. Picard, Matthew Goodwin and Jackie Chia-Hsun Lee
We propose a set of customizable, easy-to-understand, and low-cost physiological
toolkits in order to enable people to visualize and utilize autonomic arousal
information. In particular, we aim for the toolkits to be usable in one of the most
challenging usability conditions: helping individuals diagnosed with autism. This
toolkit includes: wearable, wireless, heart-rate and skin-conductance sensors;
pendant-like and hand-held physiological indicators hidden or embedded into
certain toys or tools; and a customized software interface that allows caregivers and
parents to establish a general understanding of an individual's arousal profile from
daily life and to set up physiological alarms for events of interest. We are evaluating
the ability of this externalization toolkit to help individuals on the autism spectrum to
better communicate their internal states to trusted teachers and family members.

321. FaceSense: Affective-Cognitive State Inference from Facial Video Daniel McDuff, Rana el
Kaliouby, Abdelrahman Nasser Mahmoud, Youssef Kashef, M. Ehsan Hoque, Matthew Goodwin and
Rosalind W. Picard

People express and communicate their mental states—such as emotions, thoughts, and
desires—through facial expressions, vocal nuances, gestures, and other
non-verbal channels. We have developed a computational model that enables
real-time analysis, tagging, and inference of cognitive-affective mental states from
facial video. This framework combines bottom-up, vision-based processing of the
face (e.g., a head nod or smile) with top-down predictions of mental-state models
(e.g., interest and confusion) to interpret the meaning underlying head and facial
signals over time. Our system tags facial expressions, head gestures, and
affective-cognitive states at multiple spatial and temporal granularities in real time
and offline, in both natural human-human and human-computer interaction contexts.
A version of this system is being made available commercially by Media Lab
spin-off Affectiva, indexing emotion from faces. Applications range from measuring
people's experiences to a training tool for autism spectrum disorders and people
who are nonverbal learning disabled.

Alumni Contributor: Miriam A Madsen

322. Facial Expression Analysis Over the Web Rosalind W. Picard, Rana el Kaliouby, Daniel
Jonathan McDuff, Affectiva and Forbes
We present the first project analyzing facial expressions over the internet. The
interface analyzes the participants' smile intensity as they watch popular
commercials. They can compare their responses to an aggregate from the larger
population. The system also allows us to crowd-source data for training expression
recognition systems.

323. FEEL: Frequent EDA Event Logger
Yadid Ayzenberg and Rosalind Picard
NEW LISTING

Have you ever wondered which emails, phone calls, or meetings cause you the
most stress or anxiety? Now you can find out. A wristband sensor
measures electrodermal activity (EDA), which responds to stress, anxiety, and
arousal. Each time you read an email, place a call, or hold a meeting, your phone
will measure your EDA levels by connecting to the sensor via Bluetooth. The goal is
to design a tool that enables the user to attribute levels of stress and anxiety to
particular events. FEEL allows the user to view all of the events and the levels of
EDA that are associated with them: with FEEL, users can see which event caused a
higher level of anxiety and stress, and can view which part of an event caused the
greatest reaction. Users can also view EDA levels in real time.
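
As a rough illustration of the event-to-EDA attribution described above, the sketch below pairs timestamped events with a stream of EDA samples and ranks events by mean arousal. The field names, fixed sampling rate, and mean-over-window summary are assumptions made for this example, not details of the actual FEEL system.

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Event:
        name: str      # e.g., "email: budget review" (hypothetical label)
        start: float   # seconds since the session began
        end: float

    def eda_for_event(event, eda_samples, sample_rate_hz=4.0):
        """Return the EDA readings (microsiemens) recorded during an event.

        Assumes eda_samples were taken at a fixed rate starting at t = 0;
        a real system would align the Bluetooth sensor's own timestamps."""
        i0 = int(event.start * sample_rate_hz)
        i1 = int(event.end * sample_rate_hz)
        return eda_samples[i0:i1]

    def rank_events_by_arousal(events, eda_samples):
        """Rank events by mean EDA, most arousing first."""
        scored = []
        for ev in events:
            window = eda_for_event(ev, eda_samples)
            if window:
                scored.append((mean(window), ev.name))
        return sorted(scored, reverse=True)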

324. Frame It Rosalind W. Picard and Micah Eckhardt

Frame It is an interactive, blended, tangible-digital puzzle game intended as a


play-centered teaching and therapeutic tool. Current work is focused on the
development of a social-signals puzzle game for children with autism that will help
them recognize social-emotional cues from information surrounding the eyes. In
addition, we are investigating if this play-centered therapy results in the children
becoming less averse to direct eye contact with others. The study uses eye-tracking
technology to measure gaze behavior while participants are exposed to images and
videos of social settings and expressions. Results indicate that significant changes
in expression recognition and social gaze are possible after repeated uses of the
Frame It game platform.

325. Gesture Guitar Rosalind W. Picard, Rob Morris and Tod Machover

Emotions are often conveyed through gesture. Instruments that respond to gestures
offer musicians new, exciting modes of musical expression. This project gives
musicians wireless, gestural-based control over guitar effects parameters.

326. IDA: Inexpensive Networked Digital Stethoscope
Yadid Ayzenberg

Complex and expensive medical devices are mainly used in medical facilities by
health professionals. IDA is an attempt to disrupt this paradigm and introduce a new
type of device: easy to use, low cost, and open source. It is a digital stethoscope
that can be connected to the Internet for streaming the physiological data to remote
clinicians. Designed to be fabricated anywhere in the world with minimal equipment,
it can be operated by individuals without medical training.

327. Infant Monitoring and Communication
Rana el Kaliouby, Rich Fletcher, Matthew Goodwin and Rosalind W. Picard
We have been developing comfortable, safe, attractive physiological sensors that
infants can wear around the clock to wirelessly communicate their internal
physiological state changes. The sensors capture sympathetic nervous system
arousal, temperature, physical activity, and other physiological indications that can
be processed to signal changes in sleep, arousal, discomfort or distress, all of which
are important for helping parents better understand the internal state of their child
and what things stress or soothe their baby. The technology can also be used to
collect physiological and circadian patterns of data in infants at risk for
developmental disabilities.



328. Long-Term Physio and Behavioral Data Analysis
Akane Sano and Rosalind Picard
NEW LISTING

Most of the time, healthy people feel fine, but sometimes they are tired or have
colds. We all have fluctuations in our physical and mental health, but how can we
predict how our condition will change, and how can we leverage data from the
healthy population to prevent disease? We analyze long-term multi-modal data
(electro-dermal activity, skin temperature, and accelerometer) during day and night
with wearable sensors to extract bio-markers related to health conditions, interpret
inter-individual differences, and develop systems to keep people healthy.

329. Machine Learning and Pattern Recognition with Multiple Modalities
Hyungil Ahn and Rosalind W. Picard

This project develops new theory and algorithms to enable computers to make rapid
and accurate inferences from multiple modes of data, such as determining a
person's affective state from multiple sensors—video, mouse behavior, chair
pressure patterns, typed selections, or physiology. Recent efforts focus on
understanding the level of a person's attention, useful for things such as
determining when to interrupt. Our approach is Bayesian: formulating probabilistic
models on the basis of domain knowledge and training data, and then performing
inference according to the rules of probability theory. This type of sensor fusion
work is especially challenging due to problems of sensor channel drop-out, different
kinds of noise in different channels, dependence between channels, scarce and
sometimes inaccurate labels, and patterns to detect that are inherently time-varying.
We have constructed a variety of new algorithms for solving these problems and
demonstrated their performance gains over other state-of-the-art methods.

Alumni Contributor: Ashish Kapoor
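
For readers unfamiliar with this style of inference, here is a minimal sketch of Bayesian fusion across sensor channels that simply skips channels that have dropped out. It assumes conditional independence between channels given the hidden state, which is exactly the simplification the project goes beyond; the states, channels, and numbers are invented for the example and are not the project's models.

    import math

    def fuse_channels(priors, likelihoods):
        """Naive Bayesian fusion of several sensor channels.

        priors:      dict state -> prior probability
        likelihoods: dict channel -> dict state -> p(observation | state);
                     channels that dropped out are simply absent.
        Assumes channels are conditionally independent given the state."""
        log_post = {state: math.log(p) for state, p in priors.items()}
        for channel, per_state in likelihoods.items():
            for state in log_post:
                log_post[state] += math.log(per_state[state])
        # Normalize back into probabilities.
        peak = max(log_post.values())
        unnorm = {s: math.exp(v - peak) for s, v in log_post.items()}
        total = sum(unnorm.values())
        return {s: v / total for s, v in unnorm.items()}

    # Example: infer "high" vs. "low" attention when the chair-pressure
    # channel has dropped out and only video and typing remain.
    priors = {"high": 0.5, "low": 0.5}
    observations = {
        "video":  {"high": 0.7, "low": 0.2},
        "typing": {"high": 0.6, "low": 0.4},
    }
    print(fuse_channels(priors, observations))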

330. Measuring Arousal During Therapy for Children with Autism and ADHD
Rosalind W. Picard and Elliott Hedman

Physiological arousal is an important part of occupational therapy for children with
autism and ADHD, but therapists do not have a way to objectively measure how
therapy affects arousal. We hypothesize that when children participate in guided
activities within an occupational therapy setting, informative changes in
electrodermal activity (EDA) can be detected using iCalm. iCalm is a small, wireless
sensor that measures EDA and motion, worn on the wrist or above the ankle.
Statistical analysis describing how equipment affects EDA was inconclusive,
suggesting that many factors play a role in how a child’s EDA changes. Case
studies provided examples of how occupational therapy affected children’s EDA.
This is the first study of the effects of occupational therapy’s in situ activities using
continuous physiologic measures. The results suggest that careful case study
analyses of the relation between therapeutic activities and physiological arousal
may inform clinical practice.

331. Measuring Customer Experiences with Arousal
Rosalind W. Picard and Elliott Hedman

How can we better understand people's emotional experiences with a product or
service? Traditional interview methods require people to remember their emotional
state, which is difficult. We use psychophysiological measurements such as heart
rate and skin conductance to map people’s emotional changes across time. We
then interview people about times when their emotions changed, in order to gain
insight into the experiences that corresponded with the emotional changes. This
method has been used to generate hundreds of insights with a variety of products
including games, interfaces, therapeutic activities, and self-driving cars.
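
A toy sketch of the first analysis step this method implies: scan the skin-conductance stream for moments of sharp rise, then use those timestamps as interview prompts. The sampling rate, window, and threshold below are arbitrary example values, not the researchers' protocol.

    def arousal_moments(eda, rate_hz=4.0, window_s=5.0, rise_threshold=0.05):
        """Return times (in seconds) where EDA rose by more than
        rise_threshold microsiemens over the preceding window."""
        w = int(window_s * rate_hz)
        moments = []
        for i in range(w, len(eda)):
            if eda[i] - eda[i - w] > rise_threshold:
                moments.append(i / rate_hz)
        return moments

    # Example: a mostly flat signal with one rise around t = 2.5-3.5 s.
    signal = [0.30] * 8 + [0.30, 0.34, 0.39, 0.45] + [0.45] * 8
    print(arousal_moments(signal, window_s=1.0))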



332. Mobile Health Interventions for Drug Addiction and PTSD
Rich Fletcher and Rosalind Picard

We are developing a mobile phone-based platform to assist people with chronic
diseases, panic-anxiety disorders, or addictions. Making use of wearable, wireless
biosensors, the mobile phone uses pattern analysis and machine learning
algorithms to detect specific physiological states and perform automatic
interventions in the form of text/images plus sound files and social networking
elements. We are currently working with the Veterans Administration drug
rehabilitation program involving veterans with PTSD.

333. Multimodal Computational Behavior Analysis
David Forsyth (UIUC), Gregory Abowd (GA Tech), Jim Rehg (GA Tech), Shri
Narayanan (USC), Rana el Kaliouby, Matthew Goodwin, Rosalind W. Picard,
Javier Hernandez Rivera, Stan Sclaroff (BU) and Takeo Kanade (CMU)

This project will define and explore a new research area we call Computational
Behavior Science–integrated technologies for multimodal computational sensing
and modeling to capture, measure, analyze, and understand human behaviors. Our
motivating goal is to revolutionize diagnosis and treatment of behavioral and
developmental disorders. Our thesis is that emerging sensing and interpretation
capabilities in vision, audition, and wearable computing technologies, when further
developed and properly integrated, will transform this vision into reality. More
specifically, we hope to: (1) enable widespread autism screening by allowing
non-experts to easily collect high-quality behavioral data and perform initial
assessment of risk status; (2) improve behavioral therapy through increased
availability and improved quality, by making it easier to track the progress of an
intervention and follow guidelines for maximizing learning progress; and (3) enable
longitudinal analysis of a child's development based on quantitative behavioral data,
using new tools for visualization.

334. Sensor-Enabled Measurement of Stereotypy and Arousal in Individuals with Autism
Matthew Goodwin, Clark Freifeld and Sophia Yuditskaya

A small number of studies support the notion of a functional relationship between
movement stereotypy and arousal in individuals with ASD, such that changes in
autonomic activity either precede or are a consequence of engaging in stereotypical
motor movements. Unfortunately, it is difficult to generalize these findings, as
previous studies fail to report reliability statistics that demonstrate accurate
identification of movement stereotypy start and end times, and use autonomic
monitors that are obtrusive and thus only suitable for short-term measurement in
laboratory settings. The current investigation further explores the relationship
between movement stereotypy and autonomic activity in persons with autism by
combining state-of-the-art ambulatory heart rate monitors to objectively assess
arousal across settings; and wireless, wearable motion sensors and pattern
recognition software that can automatically and reliably detect stereotypical motor
movements in individuals with autism in real time.
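
As a loose illustration (not the project's recognizer), the sketch below flags a window of accelerometer magnitudes as repetitive when its normalized autocorrelation shows a strong periodic peak; a deployed system would feed richer features to a trained classifier. The thresholds and window format are assumptions for this example.

    import numpy as np

    def is_repetitive(window, min_lag=5, peak_threshold=0.6):
        """Heuristically decide whether a window of accelerometer magnitudes
        looks periodic (e.g., rocking or hand-flapping) by checking for a
        strong peak in the normalized autocorrelation beyond min_lag."""
        x = np.asarray(window, dtype=float)
        x = x - x.mean()
        energy = float(np.dot(x, x))
        if energy == 0.0:
            return False
        ac = np.correlate(x, x, mode="full")[len(x) - 1:] / energy
        return bool(np.max(ac[min_lag:]) > peak_threshold)

    # Example: a clean sinusoid reads as repetitive; random noise does not.
    t = np.arange(100)
    print(is_repetitive(np.sin(2 * np.pi * t / 20)))                  # True
    print(is_repetitive(np.random.default_rng(0).normal(size=100)))  # usually False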

335. Social + Sleep + Moods
Akane Sano and Rosalind Picard
NEW LISTING

Sleep is critical to a wide range of biological functions; inadequate sleep results in
impaired cognitive performance and mood, and adverse health outcomes including
obesity, diabetes, and cardiovascular disease. Recent studies have shown that
healthy and unhealthy sleep behaviors can be transmitted by social interactions
between individuals within social networks. We investigate how social connectivity
and light exposure influence sleep patterns and, in turn, health and performance. Using
multimodal data collected from closely connected MIT undergraduates with
wearable sensors and mobile phones, we will develop the statistical and multi-scale
mathematical models of sleep dynamics within social networks based on sleep and
circadian physiology. These models will provide insights into the emergent
dynamics of sleep behaviors within social networks, and allow us to test the effects
of candidate strategies for intervening in populations with unhealthy sleep
behaviors.



336. StoryScape Rosalind W. Picard and Micah Eckhardt

NEW LISTING

StoryScape is a social illustrated primer. The StoryScape platform is being
developed to allow for easy creation of highly interactive and customizable stories.
In addition, the platform will allow a community of content creators to easily share,
collaborate, and remix each others' works. Experimental goals of StoryScape
include its use with children diagnosed with autism who are minimally verbal or
non-verbal. We seek to test our interaction paradigm and personalization feature to
determine if multi-modal interactive and customizable stories influence language
acquisition and expression.

337. The Frustration of Learning Monopoly
Rosalind W. Picard and Elliott Hedman
NEW LISTING

We are looking at the emotional experience created when children learn games.
Why do we start games with the most boring part, reading the directions? How can
we create a product that does not create an abundance of work for parents? Key
insights generated from field work, interviews, and measurement of electrodermal
activity are: kids become bored listening to directions ("it's like going to school");
parents feel rushed reading directions as they sense their children's boredom;
children and parents struggle for power in interpreting and enforcing rules; children
learn games by mimicking their parents; and children enjoy the challenge of
learning new games.

Ramesh Raskar—Camera Culture


How to create new ways to capture and share visual information.

338. 6D Display Ramesh Raskar, Martin Fuchs, Hans-Peter Seidel, and Hendrik P. A. Lensch

NEW LISTING

Is it possible to create passive displays that respond to changes in viewpoint and
incident light conditions? Holograms and 4D displays respond to changes in
viewpoint. 6D displays respond to changes in viewpoint as well as surrounding light.
We encode the 6D reflectance field into an ordinary 2D film. These displays are
completely passive and do not require any power. Applications include novel
instruction manuals and mood lights.

339. Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance
Ramesh Raskar, Ankit Mohan, Grace Woo, Shinsaku Hiura and Quinn Smithwick

With over a billion people carrying camera-phones worldwide, we have a new
opportunity to upgrade the classic bar code to encourage a flexible interface
between the machine world and the human world. Current bar codes must be read
within a short range and the codes occupy valuable space on products. We present
a new, low-cost, passive optical design so that bar codes can be shrunk to less
than 3 mm and can be read by unmodified ordinary cameras several meters away.

340. CATRA: Mapping of Cataract Opacities Through an Interactive Approach
Ramesh Raskar, Vitor Pamplona, Erick Passos, Jan Zizka, Jason Boggess, David Schafran, Manuel M. Oliveira, Everett Lawson, and Esteban Clua

We introduce a novel interactive method to assess cataracts in the human eye by
crafting an optical solution that measures the perceptual impact of forward
scattering on the foveal region. Current solutions rely on highly trained clinicians to
check the back scattering in the crystalline lens and test their predictions with visual
acuity tests. Close-range parallax barriers create collimated beams of light to scan
through sub-apertures scattering light as it strikes a cataract. User feedback
generates maps for opacity, attenuation, contrast, and local point-spread functions.
The goal is to allow a general audience to operate a portable, high-contrast,
light-field display to gain a meaningful understanding of their own visual conditions.
The compiled maps are used to reconstruct the cataract-affected view of an
individual, offering a unique approach for capturing information for screening,
diagnostic, and clinical analysis.

341. Coded Computational Photography
Jaewon Kim, Ahmed Kirmani, Ankit Mohan and Ramesh Raskar
Computational photography is an emerging multi-disciplinary field that is at the
intersection of optics, signal processing, computer graphics and vision, electronics,
art, and online sharing in social networks. The first phase of computational
photography was about building a super-camera that has enhanced performance in
terms of the traditional parameters, such as dynamic range, field of view, or depth of
field. We call this 'Epsilon Photography.' The next phase of computational
photography is building tools that go beyond the capabilities of this super-camera.
We call this 'Coded Photography.' We can code exposure, aperture, motion,
wavelength, and illumination. By blocking light over time or space, we can preserve
more details about the scene in the recorded single photograph.

342. Compressive Sensing for Visual Signals
Ramesh Raskar, Kshitij Marwah and Ashok Veeraraghavan (MERL)
Research in computer vision is riding a new tide called compressive sensing.
Carefully designed capture methods exploit the sparsity of the underlying signal in a
transformed domain to reduce the number of measurements and use an
appropriate reconstruction method. Traditional progressive methods capture
successively more detail using a sequence of simple projection bases, whereas
random projections use no such sequence and instead rely on sparse (l0)
minimization for reconstruction, which is computationally inefficient. Here, we
question this new tide and claim that for most situations simple methods work
better, and that the best projective method lies somewhere between the two extremes.

Alumni Contributor: Rohit Pandharkar
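
To make the trade-off concrete, here is a toy sparse-recovery example from random projections using greedy orthogonal matching pursuit, a simpler stand-in for the l0 solvers mentioned above; all dimensions and the choice of algorithm are ours, not the project's.

    import numpy as np

    def omp(A, y, sparsity):
        """Greedy orthogonal matching pursuit: find a vector x with at most
        `sparsity` nonzeros such that A @ x approximates y."""
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(sparsity):
            # Pick the column most correlated with the current residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # Re-fit least squares on the selected support.
            coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x[:] = 0.0
            x[support] = coeffs
            residual = y - A @ x
        return x

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5                     # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection matrix
    y = A @ x_true                                 # compressive measurements
    print(np.linalg.norm(omp(A, y, k) - x_true))   # should be close to zero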

343. Layered 3D: Glasses-Free 3D Printing
Gordon Wetzstein, Douglas Lanman, Matthew Hirsch, Wolfgang Heidrich, and Ramesh Raskar
NEW LISTING

We develop tomographic techniques for image synthesis on displays composed of
compact volumes of light-attenuating material. Such volumetric attenuators recreate
a 4D light field or high-contrast 2D image when illuminated by a uniform backlight.
Since arbitrary views may be inconsistent with any single attenuator, iterative
tomographic reconstruction minimizes the difference between the emitted and target
light fields, subject to physical constraints on attenuation. For 3D displays, spatial
resolution, depth of field, and brightness are increased, compared to parallax
barriers. We conclude by demonstrating the benefits and limitations of
attenuation-based light field displays using an inexpensive fabrication method:
separating multiple printed transparencies with acrylic sheets.

344. LensChat: Sharing Photos with Strangers
Ramesh Raskar, Rob Gens and Wei-Chao Chen

With networked cameras in everyone's pockets, we are exploring the practical and
creative possibilities of public imaging. LensChat allows cameras to communicate
with each other using trusted optical communications, allowing users to share
photos with a friend by taking pictures of each other, or borrow the perspective and
abilities of many cameras.



345. Looking Around Corners
Ramesh Raskar, Andrew Bardagjy, Otkrist Gupta, Andreas Velten and Moungi Bawendi

Using a femtosecond laser and a camera with a time resolution of about one trillion
frames per second, we can capture movies of light as it moves through a scene,
gets trapped inside a tomato, or bounces off the surfaces in a bottle of water. We
use this ability to see the time of flight and to reconstruct images of objects that our
camera can not see directly (i.e., to look around the corner).

Alumni Contributor: Di Wu

346. NETRA: Smartphone Add-On for Eye Tests
Vitor Pamplona, Manuel Oliveira, Erick Passos, Ankit Mohan, David Schafran, Jason Boggess and Ramesh Raskar

Can a person look at a portable display, click on a few buttons, and recover his
refractive condition? Our optometry solution combines inexpensive optical elements
and interactive software components to create a new optometry device suitable for
developing countries. The technology allows for early, extremely low-cost, mobile,
fast, and automated diagnosis of the most common refractive eye disorders: myopia
(nearsightedness), hypermetropia (farsightedness), astigmatism, and presbyopia
(age-related visual impairment). The patient overlaps lines in up to eight meridians
and the Android app computes the prescription. The average accuracy is
comparable to the prior art—and in some cases, even better. We propose the use
of our technology as a self-evaluation tool for use in homes, schools, and at health
centers in developing countries, and in places where an optometrist is not available
or is too expensive.
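
One standard way to turn per-meridian power measurements into a sphero-cylindrical prescription is a least-squares fit of the meridional-power sinusoid; the sketch below shows that computation, but it is only an illustration of the underlying optics, not necessarily the app's exact algorithm, and the simulated readings are invented.

    import numpy as np

    def prescription_from_meridians(angles_deg, powers_diopters):
        """Fit P(theta) = M + a*cos(2*theta) + b*sin(2*theta), the meridional
        power of a sphero-cylindrical correction, and return (sphere,
        cylinder, axis) in the negative-cylinder convention."""
        theta = np.radians(np.asarray(angles_deg, dtype=float))
        P = np.asarray(powers_diopters, dtype=float)
        X = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
        (M, a, b), *_ = np.linalg.lstsq(X, P, rcond=None)
        r = float(np.hypot(a, b))
        cylinder = -2.0 * r
        sphere = float(M) + r
        axis = float(np.degrees(np.arctan2(b, a)) / 2.0) % 180.0
        return sphere, cylinder, axis

    # Simulated readings in eight meridians for a -1.00 / -0.75 x 30 eye.
    angles = np.arange(0, 180, 22.5)
    S, C, ax = -1.00, -0.75, 30.0
    powers = (S + C / 2) - (C / 2) * np.cos(2 * np.radians(angles - ax))
    print(prescription_from_meridians(angles, powers))   # ~(-1.00, -0.75, 30.0)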

347. PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures
Ramesh Raskar, Aydin Arpa, Otkrist Gupta and Gabriel Taubin
NEW LISTING

We present a near real-time system for interactively exploring a collectively
captured moment without explicit 3D reconstruction. Our system favors immediacy
and local coherency over global consistency. It is common to represent photos as
vertices of a weighted graph. The weighted angled graphs of photos used in this
work can be regarded as the result of discretizing the Riemannian geometry of the
high dimensional manifold of all possible photos. Ultimately, our system enables
everyday people to take advantage of each others' perspectives in order to create
on-the-spot spatiotemporal visual experiences similar to the popular bullet-time
sequence. We believe that this type of application will greatly enhance shared
human experiences spanning from events as personal as parents watching their
children's football game to highly publicized red-carpet galas.

348. Polarization Fields: Glasses-Free 3DTV
Douglas Lanman, Gordon Wetzstein, Matthew Hirsch, Wolfgang Heidrich, and Ramesh Raskar
NEW LISTING

We introduce polarization field displays as an optically efficient design for dynamic
light field display using multi-layered LCDs. Such displays consist of a stacked set
of liquid crystal panels with a single pair of crossed linear polarizers. Each layer is
modeled as a spatially controllable polarization rotator, as opposed to a
conventional spatial light modulator that directly attenuates light. We demonstrate
that such displays can be controlled, at interactive refresh rates, by adopting the
SART algorithm to tomographically solve for the optimal spatially varying
polarization state rotations applied by each layer. We validate our design by
constructing a prototype using modified off-the-shelf panels. We demonstrate
interactive display using a GPU-based SART implementation supporting both
polarization-based and attenuation-based architectures.
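
For context, SART is an iterative solver for large linear systems of the form b ≈ A x, updating the estimate with row- and column-normalized residuals. The toy sketch below shows the generic update on a tiny made-up system; the display's actual GPU implementation, its parameterization of polarization rotations, and its physical constraints are all beyond this illustration.

    import numpy as np

    def sart(A, b, iterations=200, relax=1.0):
        """Generic SART iterations for b ~ A @ x with a non-negative matrix A."""
        row_sums = A.sum(axis=1).astype(float)
        col_sums = A.sum(axis=0).astype(float)
        row_sums[row_sums == 0] = 1.0
        col_sums[col_sums == 0] = 1.0
        x = np.zeros(A.shape[1])
        for _ in range(iterations):
            residual = (b - A @ x) / row_sums        # row-normalized mismatch
            x = x + relax * (A.T @ residual) / col_sums
        return x

    # Tiny example: recover three attenuation values from their pairwise sums.
    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])
    x_true = np.array([0.2, 0.5, 0.3])
    print(sart(A, A @ x_true))    # approaches [0.2, 0.5, 0.3]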



349. Portable Retinal Imaging
Everett Lawson, Jason Boggess, Alex Olwal, Gordon Wetzstein, and Siddharth Khullar
NEW LISTING

The major challenge in preventing blindness is identifying patients and bringing
them to specialty care. Diseases that affect the retina, the image sensor in the
human eye, are particularly challenging to address, because they require highly
trained eye specialists (ophthalmologists) who use expensive equipment to
visualize the inner parts of the eye. Diabetic retinopathy, HIV/AIDS related retinitis,
and age-related macular degeneration are three conditions that can be screened
and diagnosed to prevent blindness caused by damage to the retina. We exploit a
combination of two novel ideas that simplify the constraints of traditional devices,
using simplified optics and clever illumination to capture and visualize
images of the retina in a standalone device easily operated by the user. Prototypes
are conveniently embedded in either a mobile hand-held retinal camera, or
wearable eye-glasses.

350. Reflectance Acquisition Using Ultrafast Imaging
Ramesh Raskar and Nikhil Naik

We demonstrate a new technique that allows a camera to rapidly acquire
reflectance properties of objects 'in the wild' from a single viewpoint, over relatively
long distances and without encircling equipment. This project has a wide variety of
applications in computer graphics including image relighting, material identification,
and image editing.

351. Second Skin: Motion Capture with Actuated Feedback for Motor Learning
Ramesh Raskar, Kenichiro Fukushi, Christopher Schonauer and Jan Zizka

We have created a 3D motion-tracking system with automatic, real-time
vibrotactile feedback, built from an assembly of photo-sensors, infrared projector
pairs, vibration motors, and a wearable suit. This system allows us to enhance and
quicken the motor learning process in a variety of fields such as healthcare (physiotherapy),
entertainment (dance), and sports (martial arts).

Alumni Contributor: Dennis Ryan Miaw

352. Shield Field Imaging Jaewon Kim

We present a new single-shot, shadow-based method for scanning 3D objects.
We decouple 3D occluders from 4D illumination using shield fields: the 4D
attenuation function which acts on any light field incident on an occluder. We then
analyze occluder reconstruction from cast shadows, leading to a single-shot light
field camera for visual hull reconstruction.

353. Single Lens Off-Chip Cellphone Microscopy
Ramesh Raskar and Aydin Arpa
NEW LISTING

Within the last few years, cellphone subscriptions have spread widely and now
cover even the remotest parts of the planet. Adequate access to healthcare,
however, is not widely available, especially in developing countries. We propose a
new approach to converting cellphones into low-cost scientific devices for
microscopy. Cellphone microscopes have the potential to revolutionize
health-related screening and analysis for a variety of applications, including blood
and water tests. Our optical system is more flexible than previously proposed
mobile microscopes, and allows for wide field of view panoramic imaging, the
acquisition of parallax, and coded background illumination, which optically
enhances the contrast of transparent and refractive specimens.

354. Slow Display Daniel Saakes, Kevin Chiu, Tyler Hutchison, Biyeun Buczyk, Naoya Koizumi
and Masahiko Inami

How can we show our 16 megapixel photos from our latest trip on a digital display?
How can we create screens that are visible in direct sunlight as well as complete
darkness? How can we create large displays that consume less than 2W of power?
How can we create design tools for digital decal application and intuitive computer-aided
aided modeling? We introduce a display that is high resolution but updates at a low
frame rate, a slow display. We use lasers and monostable light-reactive materials to
provide programmable space-time resolution. This refreshable, high resolution
display exploits the time decay of monostable materials, making it attractive in terms
of cost and power requirements. Our effort to repurpose these materials involves
solving underlying problems in color reproduction, day-night visibility, and optimal
time sequences for updating content.

355. SpeckleSense Alex Olwal, Andrew Bardagjy, Jan Zizka and Ramesh Raskar

NEW LISTING

Motion sensing is of fundamental importance for user interfaces and input devices.
In applications where optical sensing is preferred, traditional camera-based
approaches can be prohibitive due to limited resolution, low frame rates, and the
required computational power for image processing. We introduce a novel set of
motion-sensing configurations based on laser speckle sensing that are particularly
suitable for human-computer interaction. The underlying principles allow these
configurations to be fast, precise, extremely compact, and low cost.

356. Tensor Displays: High-Quality Glasses-Free 3D TV
Gordon Wetzstein, Douglas Lanman, Matthew Hirsch and Ramesh Raskar
NEW LISTING

We introduce tensor displays: a family of glasses-free 3D displays comprising all
architectures employing (a stack of) time-multiplexed LCDs illuminated by uniform
or directional backlighting. We introduce a unified optimization framework that
encompasses all tensor display architectures and allows for optimal glasses-free 3D
display. We demonstrate the benefits of tensor displays by constructing a
reconfigurable prototype using modified LCD panels and a custom integral imaging
backlight. Our efficient, GPU-based NTF implementation enables interactive
applications. In our experiments we show that tensor displays reveal practical
architectures with greater depths of field, wider fields of view, and thinner form
factors, compared to prior automultiscopic displays.

357. Theory Unifying Ray and Wavefront Lightfield Propagation
George Barbastathis, Ramesh Raskar, Belen Masia, Se Baek Oh and Tom Cuypers

This work focuses on bringing powerful concepts from wave optics to the creation of
new algorithms and applications for computer vision and graphics. Specifically,
ray-based, 4D lightfield representation, based on simple 3D geometric principles,
has led to a range of new applications that include digital refocusing, depth
estimation, synthetic aperture, and glare reduction within a camera or using an
array of cameras. The lightfield representation, however, is inadequate to describe
interactions with diffractive or phase-sensitive optical elements. Therefore we use
Fourier optics principles to represent wavefronts with additional phase information.
We introduce a key modification to the ray-based model to support modeling of
wave phenomena. The two key ideas are "negative radiance" and a "virtual light
projector." This involves exploiting higher dimensional representation of light
transport.

358. Trillion Frames Per Second Camera
Ramesh Raskar, Andreas Velten, Everett Lawson, Di Wu, and Moungi G. Bawendi
NEW LISTING

We have developed a camera system that captures movies at an effective rate of
approximately one trillion frames per second. In one frame of our movie, light moves
only about 0.6 mm. We can observe pulses of light as they propagate through a
scene. We use this information to understand how light propagation affects image
formation and to learn things about a scene that are invisible to a regular camera.



359. Vision on Tap Ramesh Raskar

Computer vision is a class of technologies that lets computers use cameras to


automatically stitch together panoramas, reconstruct 3-D geometry from multiple
photographs, and even tell you when the water's boiling. For decades, this
technology has been advancing mostly within the confines of academic institutions
and research labs. Vision on Tap is our attempt to bring computer vision to the
masses.

Alumni Contributor: Kevin Chiu

360. VisionBlocks Abhijit Bendale, Kshitij Marwah and Jason R Boggess

VisionBlocks is an on-demand, in-browser, customizable, computer-vision


application-building platform for the masses. Even without any prior programming
experience, users can create and share computer vision applications. End-users
drag and drop computer vision processing blocks to create their apps. The input
feed could be either from a user's webcam or a video from the Internet.
VisionBlocks is a community effort where researchers obtain fast feedback,
developers monetize their vision applications, and consumers can use
state-of-the-art computer vision techniques. We envision a Vision-as-a-Service
(VaaS) over-the-web model, with easy-to-use interfaces for application creation for
everyone.

Alumni Contributor: Kevin Chiu

Mitchel Resnick—Lifelong Kindergarten


How to engage people in creative learning experiences.

361. App Inventor Hal Abelson, Eric Klopfer, Mitchel Resnick, Leo Burd, Andrew McKinney,
Shaileen Pokress, CSAIL and Scheller Teacher Education Program
NEW LISTING
The Center for Mobile Learning is driven by a vision that people should be able to
experience mobile technology as creators, not just consumers. One focus of our
activity here is App Inventor, a Web-based program development tool that even
beginners with no prior programming experience can use to create mobile
applications. Work on App Inventor was initiated in Google Research by Hal
Abelson and is continuing at the MIT Media Lab as a collaboration with the
Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Scheller
Teacher Education Program (STEP).

362. Collab Camp Ricarose Roque, Amos Blanton, Natalie Rusk and Mitchel Resnick

NEW LISTING

To foster and better understand collaboration in the Scratch Online Community, we
created Collab Camp, a month-long event in which Scratch community members
form teams (“collabs”) to work together on Scratch projects. Our goals include:
analyzing how different organizational structures support collaboration in different
ways; examining how design decisions influence the diversity of participation in
collaborative activities; and studying the role of constructive feedback in creative,
collaborative processes.

363. Computer Clubhouse Mitchel Resnick, Natalie Rusk, Chris Garrity, Claudia Urrea, and Robbie Berg

At Computer Clubhouse after-school centers, young people (ages 10-18) from


low-income communities learn to express themselves creatively with new
technologies. Clubhouse members work on projects based on their own interests,
with support from adult mentors. By creating their own animations, interactive
stories, music videos, and robotic constructions, Clubhouse members become more
capable, confident, and creative learners. The first Computer Clubhouse was
established in 1993, as a collaboration between the Lifelong Kindergarten group
and The Computer Museum (now part of the Boston Museum of Science). With
financial support from Intel Corporation, the network has expanded to more than 20
countries, serving more than 20,000 young people. The Lifelong Kindergarten group
continues to develop new technologies, introduce new educational approaches, and
lead professional-development workshops for Clubhouses around the world.

Alumni Contributors: Leo Burd, Robbin Chapman, Rachel Garber, Tim Gorton,
Michelle Hlubinka and Elisabeth Sylvan

364. Computer Clubhouse Village
Chris Garrity, Natalie Rusk and Mitchel Resnick
The Computer Clubhouse Village is an online community that connects people at
Computer Clubhouse after-school centers around the world. Through the Village,
Clubhouse members and staff (at more than 100 Clubhouses in 21 countries) can
share ideas with one another, get feedback and advice on their projects, and work
together on collaborative design activities.

Alumni Contributors: Robbin Chapman, Rachel Garber and Elisabeth Sylvan

365. Drawdio Jay Silver and Mitchel Resnick

Drawdio is a pencil that draws music. You can sketch musical instruments on paper
and play them with your finger. Touch your drawings to bring them to life—or
collaborate through skin-to-skin contact. Drawdio works by creating electrical
circuits with graphite and the human body.

366. Family Scratch Nights
Ricarose Roque and Mitchel Resnick
NEW LISTING

In Family Scratch Nights, we engage parents and their children in workshops to
design and invent together with Scratch, a programming language where people
can create their own interactive animations, games, and stories. Just as children's
literacy can be supported by parents reading with them, children's creativity can be
supported by parents creating with them. Children who learn to create with
technologies like Scratch often come from homes with strong support systems. In
these workshops, we especially target families with limited access to resources and
social support around technology. By promoting participation across generations,
these creative workshops engage parents to support their children in becoming
creators and full participants in today’s digital society.

367. Learning with Data Sayamindu Dasgupta and Mitchel Resnick

NEW LISTING

More and more computational activities revolve around collecting, accessing, and
manipulating large sets of data, but introductory approaches for learning
programming typically are centered around algorithmic concepts and flow of control,
not around data. Computational exploration of data, especially datasets, has
usually been restricted to predefined operations in spreadsheet software like Microsoft
Excel. This project builds on the Scratch programming language and environment to
allow children to explore data and datasets. With the extensions provided by this
project, children can build Scratch programs to not only manipulate and analyze
data from online sources, but also to collect data through various means such as
surveys and crowd-sourcing. This toolkit will support many different types of
projects like online polls, turn-based multiplayer games, crowd-sourced stories,
visualizations, information widgets, and quiz-type games.



368. MaKey MaKey Eric Rosenbaum, Jay Silver, and Mitchel Resnick

NEW LISTING

MaKey MaKey lets you transform everyday objects into computer interfaces. Make
a game pad out of Play-Doh, a musical instrument out of bananas, or any other
invention you can imagine. It's a little USB device you plug into your computer and
you use it to make your own switches that act like keys on the keyboard: Make +
Key = MaKey MaKey! It’s plug and play. No need for any electronics or
programming skills. Since MaKey MaKey looks to your computer like a regular
mouse and keyboard, it’s automatically compatible with any piece of software you
can think of. It’s great for beginners tinkering and exploring, for experts prototyping
and inventing, and for everybody who wants to playfully transform their world.

369. Map Scratch Sayamindu Dasgupta and Mitchel Resnick

NEW LISTING

Map Scratch is an extension of Scratch that enables kids to program with maps
within their Scratch projects. With Map Scratch, kids can create interactive tours,
games, and data visualizations with real-world geographical data and maps.

370. MelodyMorph Eric Rosenbaum and Mitchel Resnick

MelodyMorph is an interface for constructing melodies and making improvised


music. It removes a constraint of traditional musical instruments: a fixed mapping
between space and pitch. What if you blew up the piano so you could put the keys
anywhere you want? With MelodyMorph you can create a customized musical
instrument, unique to the piece of music, the player, or the moment.

371. Re·play Tiffany Tseng and Mitchel Resnick

NEW LISTING

Re•play is a self-documenting construction kit for children both to share their
designs with others and reflect on their own design process. Re•play consists of a
set of angular construction pieces that can sense their connection and orientation. A
virtual model is rendered in real time as a design is constructed, and an on-screen
playback interface allows users to view models from multiple perspectives and
watch how a design was assembled.

372. Scratch Mitchel Resnick, John Maloney, Natalie Rusk, Karen Brennan, Champika
Fernando, Ricarose Roque, Sayamindu Dasgupta, Amos Blanton, Michelle
Chung, Abdulrahman Idlbi, Eric Rosenbaum, Brian Silverman, Paula Bonta

Scratch is a programming language and online community (http://scratch.mit.edu)


that makes it easy to create your own interactive stories, games, animations, and
simulations—and share your creations online. As young people create and share
Scratch projects, they learn to think creatively, reason systematically, and work
collaboratively, while also learning important mathematical and computational ideas.
Nearly 3 million projects have been shared on the Scratch website. We are currently
working on a next generation of Scratch, called Scratch 2.0, to be launched in early
2013.

Alumni Contributors: Gaia Carini, Margarita Dekoli, Evelyn Eastmond, Amon Millner,
Andres Monroy-Hernandez and Tamara Stern



373. Scratch Day Karen Brennan and Mitchel Resnick

Scratch Day is a network of face-to-face local gatherings, on the same day in all
parts of the world, where people can meet, share, and learn more about Scratch, a
programming environment that enables people to create their own interactive
stories, games, animations, and simulations. We believe that these types of
face-to-face interactions remain essential for ensuring the accessibility and
sustainability of initiatives such as Scratch. In-person interactions enable richer
forms of communication among individuals, more rapid iteration of ideas, and a
deeper sense of belonging and participation in a community. The first Scratch Day
took place on May 16, 2009, with 120 events in 44 different countries. The second
Scratch Day took place on May 22, 2010.

374. ScratchEd Karen Brennan, Michelle Chung, and Mitchel Resnick

As Scratch proliferates through the world, there is a growing need to support


learners. But for teachers, educators, and others who are primarily concerned with
enabling Scratch learning, there is a disconnect between their needs and the
resources that are presently available through the Scratch Web site. ScratchEd is
an online environment for Scratch educators to share stories, exchange resources,
ask questions, and find people.

375. ScratchJr Mitchel Resnick, Marina Bers, Paula Bonta, Brian Silverman and Sayamindu
Dasgupta
NEW LISTING
The ScratchJr project aims to bring the ideas and spirit of Scratch programming
activities to younger children, enabling children ages five to seven to program their
own interactive stories, games, and animation. To make ScratchJr developmentally
appropriate for younger children, we are revising the interface and providing new
structures to help young children learn core math concepts and problem-solving
strategies. We hope to make a version of ScratchJr publicly available in 2013.

376. Singing Fingers Eric Rosenbaum, Jay Silver and Mitchel Resnick

Singing Fingers allows children to fingerpaint with sound. Users paint by touching a
screen with a finger, but color only emerges if a sound is made at the same time. By
touching the painting again, users can play back the sound. This creates a new
level of accessibility for recording, playback, and remixing of sound.

Deb Roy—Cognitive Machines


How to build machines that learn to use language in human-like ways, and develop
tools and models to better understand how children learn to communicate and how
adults behave.

377. BlitzScribe: Speech Analysis for the Human Speechome Project
Brandon Roy and Deb Roy

BlitzScribe is a new approach to speech transcription driven by the demands of
today's massive multimedia corpora. High-quality annotations are essential for
indexing and analyzing many multimedia datasets; in particular, our study of
language development for the Human Speechome Project depends on speech
transcripts. Unfortunately, automatic speech transcription is inadequate for many
natural speech recordings, and traditional approaches to manual transcription are
extremely labor intensive and expensive. BlitzScribe uses a semi-automatic
approach, combining human and machine effort to dramatically improve
transcription speed. Automatic methods identify and segment speech in dense,
multitrack audio recordings, allowing us to build streamlined user interfaces
maximizing human productivity. The first version of BlitzScribe is already about 4-6
times faster than existing systems. We are exploring user-interface design,
machine-learning and pattern-recognition techniques to build a human-machine
collaborative system that will make massive transcription tasks feasible and
affordable.
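
The automatic step described above can be approximated, very roughly, by an energy-based segmenter like the sketch below, which turns a long track into short candidate speech spans for a human to transcribe. The frame size, threshold, and pause length are placeholder values; BlitzScribe's actual detector is more sophisticated.

    import numpy as np

    def energy_segments(samples, rate, frame_ms=30, threshold=0.02, min_gap_frames=10):
        """Return (start_sec, end_sec) spans whose frame RMS energy exceeds
        a threshold, closing a span after min_gap_frames quiet frames."""
        samples = np.asarray(samples, dtype=float)
        frame = int(rate * frame_ms / 1000)
        n_frames = len(samples) // frame
        rms = np.array([
            np.sqrt(np.mean(samples[i * frame:(i + 1) * frame] ** 2))
            for i in range(n_frames)
        ])
        active = rms > threshold
        segments, start, last, gap = [], None, None, 0
        for i, is_active in enumerate(active):
            if is_active:
                if start is None:
                    start = i
                last, gap = i, 0
            elif start is not None:
                gap += 1
                if gap >= min_gap_frames:
                    segments.append((start * frame / rate, (last + 1) * frame / rate))
                    start = None
        if start is not None:
            segments.append((start * frame / rate, (last + 1) * frame / rate))
        return segments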

378. Crowdsourcing the Creation of Smart Role-Playing Agents
Jeff Orkin and Deb Roy

We are crowdsourcing the creation of socially rich interactive characters by
collecting data from thousands of people interacting and conversing in online
multiplayer games, and mining recorded gameplay to extract patterns in language
and behavior. The tools and algorithms we are developing allow non-experts to
automate characters who can play roles by interacting and conversing with humans
(via speech or typed text), and with each other. The Restaurant Game recorded
over 16,000 people playing the roles of customers and waitresses in a virtual
restaurant. Improviso is recording humans playing the roles of actors on the set of a
sci-fi movie. This approach will enable new forms of interaction for games, training
simulations, customer service, and HR job applicant screening systems.

379. HouseFly: Immersive Video Browsing and Data Visualization
Philip DeCamp, Rony Kubat and Deb Roy

HouseFly combines audio-video recordings from multiple cameras and
microphones to generate an interactive, 3D reconstruction of recorded events.
Developed for use with the longitudinal recordings collected by the Human
Speechome Project, this software enables the user to move freely throughout a
virtual model of a home and to play back events at any time or speed. In addition to
audio and video, the project explores how different kinds of data may be visualized
in a virtual space, including speech transcripts, person tracking data, and retail
transactions.

Alumni Contributor: George Shaw

380. Human Speechome Project
Philip DeCamp, Brandon Roy, Soroush Vosoughi and Deb Roy
The Human Speechome Project is an effort to observe and computationally model
the longitudinal language development of a single child at an unprecedented scale.
To achieve this, we are recording, storing, visualizing, and analyzing communication
and behavior patterns in over 200,000 hours of home video and speech recordings.
The tools that are being developed for mining and learning from hundreds of
terabytes of multimedia data offer the potential for breaking open new business
opportunities for a broad range of industries—from security to Internet commerce.

Alumni Contributors: Michael Fleischman, Jethran Guinness, Alexia Salata and
George Shaw

381. Speech Interaction Analysis for the Human Speechome Project
Brandon Roy and Deb Roy

The Speechome Corpus is the largest corpus of a single child learning language in
a naturalistic setting. We have now transcribed significant amounts of the speech to
support new kinds of language analysis. We are currently focusing on the child's
lexical development, pinpointing "word births" and relating them to caregiver
language use. Our initial results show child vocabulary growth at an unprecedented
temporal resolution, as well as a detailed picture of other measures of linguistic
development. The results suggest individual caregivers "tune" their spoken
interactions to the child's linguistic ability with far more precision than expected,
helping to scaffold language development. To perform these analyses, new tools
have been developed for interactive data annotation and exploration.
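
A toy version of the "word birth" computation reads like the sketch below: for each word the child produces, record the earliest date it appears in the transcripts. The transcript tuples and dates here are invented; the real analysis works over millions of transcribed utterances with careful handling of transcription noise.

    from datetime import date

    def word_births(transcripts, speaker="child"):
        """Return {word: first date the given speaker produced it}.

        transcripts: iterable of (speaker, date, text) tuples."""
        births = {}
        for who, day, text in sorted(transcripts, key=lambda t: t[1]):
            if who != speaker:
                continue
            for word in text.lower().split():
                births.setdefault(word, day)   # keep only the earliest date
        return births

    transcripts = [
        ("caregiver", date(2007, 3, 2), "do you want the ball"),
        ("child", date(2007, 4, 11), "ba ba"),
        ("child", date(2007, 6, 5), "ball"),
        ("child", date(2007, 7, 9), "want ball"),
    ]
    print(word_births(transcripts))   # 'ball' is born on 2007-06-05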



382. Speechome Recorder for the Study of Child Development Disorders
Soroush Vosoughi, Joe Wood, Matthew Goodwin and Deb Roy

Collection and analysis of dense, longitudinal observational data of child behavior in
natural, ecologically valid, non-laboratory settings holds significant benefits for
advancing the understanding of autism and other developmental disorders. We
have developed the Speechome Recorder—a portable version of the embedded
audio/video recording technology originally developed for the Human Speechome
Project—to facilitate swift, cost-effective deployment in special-needs clinics and
homes. Recording child behavior daily in these settings will enable us to study
developmental trajectories of autistic children from infancy through early childhood,
as well as atypical dynamics of social interaction as they evolve on a day-to-day
basis. Its portability makes possible potentially large-scale comparative study of
developmental milestones in both neurotypical and autistic children. Data-analysis
tools developed in this research aim to reveal new insights toward early detection,
provide more accurate assessments of context-specific behaviors for individualized
treatment, and shed light on the enduring mysteries of autism.

Alumni Contributors: George Shaw and Philip DeCamp

Chris Schmandt—Speech + Mobility


How speech technologies and portable devices can enhance communication.

383. Back Talk Chris Schmandt and Andrea Colaco

The living room is the heart of social and communal interactions in a home. Often
present in this space is a screen: the television. When in use, this communal
gathering space brings together people and their interests, and their varying needs
for company, devices, and content. This project focuses on using personal devices
such as mobile phones with the television; the phone serves as a controller and
social interface by offering a channel to convey engagement, laughter, and viewer
comments, and to create remote co-presence.

384. Dotstorm Chris Schmandt and Charlie DeTar

NEW LISTING

The "Nominal Group Technique" is a popular way to brainstorm, often executed with
Post-it notes and voting stickers. We're reimagining and reimplementing this
technique for online use, for things such as hackathons, design workshops, and
brainstorms across multiple geographies. The best part: everyone can take the
results of the brainstorm with them, and embed it in blogs or websites.

385. Flickr This Chris Schmandt and Dori Lin

Inspired by the fact that people are communicating more and more through
technology, Flickr This explores ways for people to have emotion-rich conversations
through all kinds of media provided by people and technology—a way for
technology to allow remote people to have conversations more like face-to-face
experiences by grounding them in shared media. Flickr This lets viewable content
provide structure for a conversation; grounded in that content, the conversation can
move between synchronous and asynchronous, and evolve into a richer
collaborative mix of conversation and media.

386. frontdesk Chris Schmandt and Andrea Colaco

NEW LISTING

Calling a person versus calling a place has quite distinctive affordances. With the
arrival of mobile phones, the concept of calling has moved from calling a place to
calling a person. Frontdesk proposes a place-based communication tool that is
accessed primarily through any mobile device and features voice calls and text
chat. The application uses “place” loosely to define a physical space created by a
group of people that have a shared context of that place. Examples of places could
be different parts of a workspace in a physical building, such as the machine shop,
café, or Speech + Mobility group area at the Media Lab. When a user calls any of
these places, frontdesk routes their call to all people that are “checked-in” to the
place.

387. Going My Way Chris Schmandt and Jaewoo Chung

When friends give directions, they often don't describe the whole route, but instead
provide landmarks along the way with which they think we'll be familiar. Friends can
assume we have certain knowledge because they know our likes and dislikes.
Going My Way attempts to mimic a friend by learning about where you travel,
identifying the areas that are close to the desired destination from your frequent
path, and picking a set of landmarks to allow you to choose a familiar one. When
you select one of the provided landmarks, Going My Way will provide directions
from it to the destination.

Alumni Contributors: Chaochi Chang and Paulina Lisa Modlitba
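
The landmark-picking idea can be illustrated with a tiny sketch: from places the user visits often, choose the familiar one nearest the destination. The coordinates, visit counts, and threshold below are invented, and a real implementation would use proper geographic distance and the learned travel history described above.

    import math

    def closest_familiar_landmark(destination, visit_log, min_visits=5):
        """Pick the most familiar landmark (enough visits) nearest the destination.

        visit_log: {name: ((lat, lon), visit_count)}; distance here is plain
        Euclidean on coordinates, fine only for this toy example."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        familiar = [(name, pos) for name, (pos, visits) in visit_log.items()
                    if visits >= min_visits]
        return min(familiar, key=lambda item: dist(item[1], destination))[0]

    visit_log = {
        "corner cafe":   ((42.3601, -71.0942), 23),
        "grocery store": ((42.3625, -71.0850), 9),
        "gym":           ((42.3550, -71.1000), 2),   # visited too rarely
    }
    print(closest_familiar_landmark((42.3620, -71.0860), visit_log))  # "grocery store"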

388. Guiding Light Chris Schmandt and Jaewoo Chung

Guiding Light is a navigation-based application that provides directions by projecting


them onto physical spaces both indoors and outdoors. It enables a user to get
relevant spatial information by using a mini projector in a cell phone. The core
metaphor involved in this design is that of a flashlight which reveals objects in and
information about the space it illuminates. For indoor navigation, Guiding Light uses
a combination of e-compass, accelerometer, proximity sensors, and tags to place
information appropriately. In contrast to existing heads-up displays that push
information into the user's field of view, Guiding Light works on a pull principle,
relying entirely on users' requests and control of information.

389. Indoor Location Sensing Using Geo-Magnetism
Chris Schmandt, Jaewoo Chung, Nan-Wei Gong, Wu-Hsi Li and Joe Paradiso

We present an indoor positioning system that measures location using disturbances
of the Earth's magnetic field by structural steel elements in a building. The presence
of these large steel members warps the geomagnetic field such that lines of
magnetic force are locally not parallel. We measure the divergence of the lines of
the magnetic force field using e-compass parts with slight physical offsets; these
measurements are used to create local position signatures for later comparison with
values in the same sensors at a location to be measured. We demonstrate accuracy
within one meter 88% of the time in experiments in two buildings and across
multiple floors within the buildings.
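
At its core, this style of positioning is fingerprint matching: compare a live magnetic signature against previously recorded location signatures and pick the closest. The sketch below shows that step with made-up six-value signatures (e.g., readings from offset e-compass sensors) and plain Euclidean distance; it is an illustration, not the deployed system.

    import math

    fingerprints = {
        "elevator lobby, floor 3": [41.2, -12.5, 30.1, 40.8, -13.0, 29.7],
        "kitchen, floor 3":        [38.9, -10.1, 33.4, 39.2, -9.8, 33.0],
        "stairwell, floor 2":      [45.0, -15.2, 27.8, 44.6, -15.6, 28.1],
    }

    def locate(measurement, fingerprints):
        """Return the name of the stored fingerprint nearest the measurement."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(fingerprints, key=lambda name: dist(measurement, fingerprints[name]))

    print(locate([39.1, -10.3, 33.2, 39.0, -10.0, 33.1], fingerprints))  # "kitchen, floor 3"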

390. InterTwinkles Chris Schmandt and Charlie DeTar

Bringing deliberative process and consensus decision making to the 21st century! A
practical set of tools for assisting in meeting structure, deliberative process,
brainstorming, and negotiation. Helping groups to democratically engage with each
other, across geographies and time zones.



391. LocoRadio Chris Schmandt and Wu-Hsi Li

NEW LISTING

LocoRadio is a mobile, augmented-reality audio browsing system that immerses
you within a soundscape as you move. To enhance the browsing experience in
high-density spatialized audio environments, we introduce a UI feature, "auditory
spatial scaling," which enables users to continuously adjust the spatial density of
perceived sounds. The audio will come from a custom, geo-tagged audio database.
The current demo uses iconic music to represent restaurants. As users move in the
city, they encounter a series of musical clips, and this perception enhances their
awareness of the number, styles, and locations of nearby restaurants.

392. Musicpainter Chris Schmandt, Barry Vercoe and Wu-Hsi Li

Musicpainter is a networked, graphical composing environment that encourages


sharing and collaboration within the composing process. It provides a social
environment where users can gather and learn from each other. The approach is
based on sharing and managing music creation in small and large scales. At the
small scale, users are encouraged to begin composing by conceiving small musical
ideas, such as melodic or rhythmic fragments, all of which are collected and made
available to all users as a shared composing resource. The collection provides a
dynamic source of composing material that is inspiring and reusable. At the large
scale, users can access full compositions that are shared as open projects. Users
can listen to and change any piece. The system generates an attribution list on the
edited piece, allowing users to trace how it evolves in the environment.

393. OnTheRun Chris Schmandt and Matthew Joseph Donahoe

OnTheRun is a location-based exercise game designed for the iPhone. The player
assumes the role of a fugitive trying to gather clues to clear his name. The game is
played outdoors while running, and the game creates missions that are tailored to
the player's neighborhood and running ability. The game is primarily an audio
experience, and gameplay involves following turn-by-turn directions, outrunning
virtual enemies, and reaching destinations.

394. Puzzlaef Chris Schmandt, Sinchan Banerjee, and Drew Harry

How can one understand and visualize the lifestyle of a person on the other side of
NEW LISTING
the world? Puzzlaef attempts to tackle this question through a mobile picture-puzzle
game that users collaboratively solve using pictures drawn from their everyday lives.

395. Radio-ish Media Player
Chris Schmandt, Barry Vercoe and Wu-Hsi Li

How many decisions does it take before you hear a desired piece of music on your
iPod? First, you are asked to pick a genre, then an artist, then an album, and finally
a song. The more songs you own, the tougher the choices are. To resolve these
issues, we turn the modern music player into an old analog radio tuner: the
Radio-ish Media Player. No LCDs, no favorite channels, all you have is a knob that
will help you surf through channel after channel accompanied by synthesized noise.
Radio-ish is our attempt to revive the lost art of channel surfing in the old analog
radio tuner. Let music find you: your ears will tell you if the music is right. This
project is not only a retrospective design, but also our reflection on lost simplicity in
the process of digitalization. A mobile phone version is also available for demo.

396. ROAR Chris Schmandt and Drew Harry

NEW LISTING

The experience of being in a crowd is visceral. We feel a sense of connection and
belonging through shared experiences like watching a sporting event, speech, or
performance. In online environments, though, we are often part of a crowd without
feeling it. ROAR is designed to allow very large groups of distributed spectators to
have meaningful conversations with strangers or friends while creating a sense of
presence of thousands of other spectators. ROAR also aims to create
opportunities for collective action among spectators and to provide flexible ways to
share content among very large groups. These systems combine to let you feel the
roar of the crowd even if you're alone in your bedroom.

397. SeeIt-ShareIt Chris Schmandt, Andrea Colaco

NEW LISTING
Now that mobile phones are starting to have 3D display and capture capabilities,
there are opportunities to enable new applications that enhance person-person
communication or person-object interaction. This project explores one such
application: acquiring 3D models of objects using cell phones with stereo cameras.
Such models could serve as shared objects that ground communication in virtual
environments and mirrored worlds or in mobile augmented reality applications.

398. Spellbound Misha Sra and Chris Schmandt

NEW LISTING
Turning screen time into activity time, Spellbound is a cooperatively competitive
real-time, real-world multiplayer mobile game. It uses a fantasy game context to
connect and bring people together and encourage new kinds of activities in existing
physical spaces, as well as to encourage collaborative and strategic thinking.

399. Spotz Chris Schmandt and Misha Sra

NEW LISTING
Exploring your city is a great way to make friends, discover new places, find new
interests, and invent yourself. Spotz is an Android app where everyone collectively
defines the places they visit and the places in turn define them. Spotz allows you to
discover yourself by discovering places. You tag a spot, create some buzz for it
and, if everyone agrees the spot is 'fun' this bolsters your 'fun' quotient. If everyone
agrees the spot is 'geeky' it pushes up your ‘geeky’ score. Thus emerges your
personal tag cloud. Follow tags to chance upon new places. Find people with similar
'tag clouds' as your own and experience new places together. Create buzz for your
favorite spots and track other buzz to find who has the #bestchocolatecake in town!

400. Tin Can Chris Schmandt, Matthew Donahoe and Drew Harry

Distributed meetings present a set of interesting challenges to staying engaged and
involved. Because one person speaks at a time, it is easy (particularly for remote
participants) to disengage from the meeting undetected. However, non-speaking
roles in a meeting can be just as important as speaking ones, and if we could give
non-speaking participants ways to participate, we could help support better-run
meetings of all kinds. Tin Can collects background tasks like taking notes,
managing the agenda, sharing relevant content, and tracking to-dos in a distributed
interface that uses meeting participants' phones and laptops as input devices, and
represents current meeting activities on an iPad in the center of the table in each
meeting location. By publicly representing these background processes, we provide
meeting attendees with new ways to participate and be recognized for their
non-verbal participation.

401. Tin Can Classroom Chris Schmandt, Drew Harry and Eric Gordon (Emerson College)

Classroom discussions may not seem like an environment that needs a new kind of
supporting technology. But we've found that augmenting classroom discussions
with an iPad-based environment to help promote discussion, keep track of current
and future discussion topics, and create a shared record of the class keeps students
engaged and involved with discussion topics, and helps restart the discussion when
conversation lags. Contrary to what you might expect, having another discussion
venue doesn't seem to add to student distraction; rather it tends to focus distracted
students on this backchannel discussion. For the instructor, our system offers
powerful insights into the engagement and interests of students who tend to speak
less in class, which in turn can empower less-active students to contribute in a
venue in which they feel more comfortable.



Ethan Zuckerman—Civic Media
How to create technical and social systems to allow communities to share, understand,
and act on civic information.

402. Between the Bars Charlie DeTar

NEW LISTING
Between the Bars is a blogging platform for one out of every 142
Americans—prisoners—that makes it easy to blog using standard postal mail. It
consists of software tools that make it easy to upload PDF scans of letters and
crowd-source transcriptions of the scanned images. Between the Bars includes the
usual full-featured blogging tools, including comments, tagging, RSS feeds, and
notifications for friends and family when new posts are available.

403. Codesign Toolkit Sasha Costanza-Chock, Molly Sauter and Becky Hurwitz

NEW LISTING
Involving communities in the design process results in products more responsive to
a community's needs, more suited to accessibility and usability concerns, and
easier to adopt. Civic media tools, platforms, and research work best when
practitioners involve target communities at all stages of the process–iterative
ideation, prototyping, testing, and evaluation. In the codesign process, communities
act as codesigners and participants, rather than mere consumers, end-users, test
subjects, or objects of study. In the Codesign Studio, students practice these
methods in a service learning project-based studio, focusing on collaborative design
of civic media with local partners. The Toolkit will enable more designers and
researchers to utilize the co-design process in their work by presenting current
theory and practices in a comprehensive, accessible manner.

404. Controversy Mapper Ethan Zuckerman, Rahul Bhargava, Erhardt Graeff and Matt Stempeck

NEW LISTING
How does a media controversy become the only thing any of us are talking about?
Using the Media Cloud platform, we're reverse-engineering major news stories to
visualize how ideas spread, how media frames change over time, and whose voices
dominate a discussion. We've started with a case study of Trayvon Martin, a
teenager who was shot and killed. His story became major national news...weeks
after his death. Analysis of stories like Trayvon's provides a revealing portrait of our
complicated media ecosystem.

405. Data Therapy Rahul Bhargava

NEW LISTING
We are actively engaging with community coalitions in order to build their capacity
to do their own data visualization and presentation. New computer-based tools are
lowering the barriers of entry for making engaging and creative presentations of
data. Rather than encouraging partnerships with epidemiologists, statisticians, or
programmers, we see an opportunity to build capacity within small community
organizations by using these new tools.

406. Grassroots Mobile Infrastructure Joe Paradiso, Ethan Zuckerman, Pragun Goyal and Nathan Matias

NEW LISTING
We want to help people in nations where electric power is scarce sell power to their
neighbors. We’re designing a piece of prototype hardware that plugs into a diesel
generator or other power source, distributes the power to multiple outlets, monitors
how much power is used, and uses mobile payments to charge the customer for the
power consumed.
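
As a rough illustration of the metering-and-billing idea (not the prototype's actual firmware or payment integration), the Python sketch below charges each outlet's customer for the energy used since the last reading; the tariff value and record formats are invented for the example.

    # Illustrative sketch only, not the project's actual firmware or payment code;
    # the tariff and data formats below are assumptions made for the example.
    TARIFF_PER_KWH = 0.50  # assumed price per kilowatt-hour, in local currency

    def compute_invoices(readings_now, readings_last, customers, tariff=TARIFF_PER_KWH):
        """Charge each outlet's customer for energy used since the last reading."""
        invoices = {}
        for outlet_id, kwh_now in readings_now.items():
            used = max(0.0, kwh_now - readings_last.get(outlet_id, 0.0))
            invoices[customers[outlet_id]] = round(used * tariff, 2)
        return invoices

    # Outlet 1 went from 10.0 to 12.5 kWh, outlet 2 from 4.0 to 4.8 kWh.
    print(compute_invoices({1: 12.5, 2: 4.8}, {1: 10.0, 2: 4.0},
                           {1: "customer_a", 2: "customer_b"}))
    # -> {'customer_a': 1.25, 'customer_b': 0.4}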



407. LazyTruth Ethan Zuckerman, Matt Stempeck, David Kim, Evan Moore, Justin Nowell and
Tess Wise
NEW LISTING
Have you ever been forwarded an email that you just can’t believe? Our inboxes are
rife with misinformation. The truth is out there, just not when we actually need it.
LazyTruth is a Gmail gadget that surfaces verified truths when you receive common
chain emails. It all happens right in your inbox, without requiring you to search
anywhere. The result is that it becomes much more convenient for citizens to
combat misinformation, rather than acquiesce to its volume. Whether it’s political
rumors, gift card scams, or phishing attempts, fact is now as convenient as fiction.
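
A minimal sketch of the matching step, assuming a small hand-built list of debunked rumors rather than LazyTruth's real verification sources; the entries, threshold, and function name are invented for illustration.

    # Check an incoming email body against a small database of known chain-email
    # rumors and return any matching debunks. The rumor entries are invented examples.
    DEBUNKED = [
        {"keywords": {"free", "gift", "card", "forward"}, "reply": "Known gift-card scam."},
        {"keywords": {"congress", "pension", "bill"},     "reply": "Debunked political rumor."},
    ]

    def find_debunks(email_body, min_overlap=3):
        words = set(email_body.lower().split())
        return [entry["reply"] for entry in DEBUNKED
                if len(entry["keywords"] & words) >= min_overlap]

    print(find_debunks("Forward this to 10 friends for a free $100 gift card!"))
    # -> ['Known gift-card scam.']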

408. Mapping Banned Books Ethan Zuckerman, American Library Association, Chris Peterson and
National Coalition Against Censorship

NEW LISTING
Books are challenged and banned in public schools and libraries across the
country. But which books, where, by whom, and for what reasons? The Mapping
Banned Books project is a partnership between the Center for Civic Media, the
American Library Association, and the National Coalition Against Censorship to a)
visualize existing data on book challenges, b) detect what the existing data doesn't
capture, and c) devise new methods to surface suppressed speech.

409. Mapping the Globe Catherine D'Ignazio and Ethan Zuckerman

NEW LISTING
Mapping the Globe is a set of interactive visualizations and maps that help us
understand where the Boston Globe directs its attention. Media attention matters –
in quantity and quality. It helps determine what we talk about as a public and how
we talk about it. Mapping the Globe tracks where the paper's attention goes and
what that attention looks like across different regional geographies in combination
with diverse data sets like population, crime and income.

410. Media Cloud Hal Roberts, Ethan Zuckerman and David LaRochelle

NEW LISTING
Media Cloud is a platform for studying media ecosystems—the relationships
between professional and citizen media, between online and offline sources. By
tracking millions of stories published online or broadcast via television, the system
allows researchers to track the spread of memes, media framings and the tone of
coverage of different stories. The platform is open source and open data, designed
to be a substrate for a wide range of communications research efforts. Media Cloud
is a collaboration between Civic Media and the Berkman Center for Internet and
Society at Harvard Law School.
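
To make the notion of tracking concrete, here is a hedged Python sketch of the kind of query a researcher might run over such a corpus: counting weekly mentions of a term per source. The record format is an assumption for this example, not Media Cloud's actual schema or API.

    # Count how often a term appears per ISO week per media source.
    from collections import Counter
    from datetime import date

    stories = [  # hypothetical records: (source, publication date, text)
        ("blog_a", date(2012, 3, 19), "Trayvon Martin coverage grows"),
        ("paper_b", date(2012, 3, 21), "Opinion: the Trayvon Martin case"),
        ("paper_b", date(2012, 3, 28), "Unrelated local story"),
    ]

    def weekly_term_counts(stories, term):
        counts = Counter()
        for source, pub_date, text in stories:
            if term.lower() in text.lower():
                counts[(source, pub_date.isocalendar()[1])] += 1  # ISO week number
        return counts

    print(weekly_term_counts(stories, "Trayvon"))
    # -> Counter({('blog_a', 12): 1, ('paper_b', 12): 1})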

411. Media Meter Ethan Zuckerman, Nathan Matias, Matt Stempeck, Rahul Bhargava and Dan
Schultz
NEW LISTING
What have you seen in the news this week? And what did you miss? Are you
getting the blend of local, international, political, and sports stories you desire? We’re
building a media-tracking platform to empower you, the individual, and news
providers themselves, to see what you’re getting and what you’re missing in your
daily consumption and production of media. The first round of modules developed
for the platform allow you to compare the breakdown of news topics and byline
gender across multiple news sources.
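
As a sketch of the comparison those first modules perform, assuming a simplified data model rather than the platform's real one, the Python below tallies topic and byline-gender breakdowns per source.

    # Tally topic mix and byline-gender split for each news source.
    from collections import Counter, defaultdict

    articles = [  # hypothetical (source, topic, byline_gender) records
        ("source_x", "politics", "female"),
        ("source_x", "sports",   "male"),
        ("source_y", "politics", "male"),
        ("source_y", "politics", "male"),
    ]

    def breakdown(articles):
        topics, genders = defaultdict(Counter), defaultdict(Counter)
        for source, topic, gender in articles:
            topics[source][topic] += 1
            genders[source][gender] += 1
        return topics, genders

    topics, genders = breakdown(articles)
    print(dict(topics["source_y"]))   # -> {'politics': 2}
    print(dict(genders["source_x"]))  # -> {'female': 1, 'male': 1}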

412. New Day New Standard Abdulai Bah, Anjum Asharia, Sasha Costanza-Chock, Rahul Bhargava,
Leo Burd, Rebecca Hurwitz, Marisa Jahn and Rodrigo Davies

NEW LISTING
New Day New Standard is an interactive hotline that informs nannies,
housekeepers, eldercare-givers, and their employers about the landmark Domestic
Workers' Bill of Rights, passed in New York State in November 2010. Operating in
English and Spanish, it's a hybrid application that combines regular touchtone
phones and Internet-based telephony within an open source framework. The Center
for Civic Media and REV are currently developing DISPTACHO, a generalized
version of the platform and associated GUI to allow other groups to create
interactive hotlines for a wide range of use cases. NDNS was presented to the
White House's Open Government Initiative.

413. NewsJack Sasha Costanza-Chock, Henry Holtzman, Ethan Zuckerman and Daniel E.
Schultz
NEW LISTING
NewsJack is a media remixing tool built from Mozilla's Hackasaurus. It allows users
to modify the front pages of news sites, changing language and headlines to
change the news into what they wish it could be.

414. NGO 2.0 Jing Wang, Rongting Zhou, Endy Xie, Shi Song

NEW LISTING
NGO2.0 is a project that grew out of the work of MIT’s New Media Action Lab. The
project recognizes that digital media and Web 2.0 are vital to grassroots NGOs in
China. NGOs in China operate under enormous constraints because of their
semi-legal status. Grassroots NGOs cannot compete with government-affiliated
NGOs for the attention of mainstream media, which leads to difficulties in acquiring
resources and raising awareness of the causes they are promoting. The NGO2.0
Project serves grassroots NGOs in the underdeveloped regions of China, training
them to enhance their digital and social media literacy through Web 2.0 workshops.
The project also rolls out a crowd map to enable the NGO sector and the Corporate
Social Responsibility sector to find out what each sector has accomplished in
producing social good.

415. PageOneX Ethan Zuckerman and Pablo Rey Mazon

NEW LISTING
PageOneX is a tool to visualize the evolution of stories on newspaper front pages.
Newspaper front pages are a key source of data about our media ecology.
Newsrooms spend a massive amount of time and effort deciding what stories make it to the front
page. PageOneX makes coding and visualizing newspaper front page content much
easier, democratizing access to newspaper attention data. Communication
researchers have analyzed newspaper front pages for decades, using slow,
laborious methods. PageOneX simplifies, digitizes, and distributes the process
across the net and makes it available for researchers, citizens and activists.
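
The underlying measurement can be sketched simply: coders mark boxes over front-page stories, and the tool reports each story's share of the page. The Python below illustrates that calculation with an invented rectangle format; it is not PageOneX's actual code.

    # Compute the fraction of front-page area covered by each coded story.
    def surface_share(page_width, page_height, rectangles):
        """rectangles: {story_label: [(x, y, w, h), ...]} in the same units as the page."""
        page_area = float(page_width * page_height)
        return {story: round(sum(w * h for _, _, w, h in rects) / page_area, 3)
                for story, rects in rectangles.items()}

    # Example: on a 1000x1400 scan, a story coded with two boxes covers ~21% of the page.
    print(surface_share(1000, 1400, {"election": [(0, 0, 600, 400), (0, 400, 300, 200)]}))
    # -> {'election': 0.214}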

416. Social Mirror Ethan Zuckerman, Nathan Matias, Gaia Marcus and Royal Society of Arts

NEW LISTING
Social Mirror transforms social science research by making offline social network
research cheaper, faster, and more reliable. Research on whole life networks
typically involves costly paper forms which take months to process. Social Mirror’s
digital process respects participant privacy while also putting social network
analysis within reach of community research and public service evaluation. By
providing instant feedback to participants, Social Mirror can also invite people to
consider and change their connection to their communities. Our pilot studies have
already shown the benefits for people facing social isolation.

417. T.I.C.K.L.E. Ethan Zuckerman, Nathan Matias and Eric Rosenbaum

NEW LISTING
The Toy Interface Construction Kit Learning Environment (T.I.C.K.L.E.) is a
universal construction kit for the rest of us. It doesn't require 3D printers or CAD
skills. Instead, it's a DIY social process for creating construction interoperability.



418. VoIP Drupal Leo Burd

VoIP Drupal is an innovative framework that brings the power of voice and
Internet-telephony to Drupal sites. It can be used to build hybrid applications that
combine regular touchtone phones, web, SMS, Twitter, IM and other
communication tools in a variety of ways, facilitating community outreach and
providing an online presence to those who are illiterate or do not have regular
access to computers. VoIP Drupal will change the way you interact with Drupal,
your phone and the web.

419. Vojo.co Ethan Zuckerman, Sasha Costanza-Chock, Rahul Bhargava, Ed Platt, Becky
Hurwitz, Rodrigo Davies, Alex Goncalves, Denise Cheng and Rogelio Lopez
NEW LISTING
Vojo.co is a hosted mobile blogging platform that makes it easy for people to share
content to the web from mobile phones via voice calls, SMS, or MMS. Our goal is to
make it easier for people in low-income communities to participate in the digital
public sphere. You don't need a smart phone or an app to post blog entries or digital
stories to Vojo - any phone will do. You don't even need internet access: Vojo lets
you create an account via SMS and start posting right away. Vojo is powered by the
VozMob Drupal Distribution, a customized version of the popular free and open
source content management system that is being developed through an ongoing
codesign process by day laborers, household workers, and a diverse team from the
Institute of Popular Education of Southern California (IDEPSCA).
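
For illustration only, here is a hypothetical sketch, written in Python rather than the Drupal stack Vojo actually runs on, of how an inbound SMS might become a blog post: the sender's number identifies or creates the account, and the message body is published.

    # Hypothetical SMS-to-post handler; not Vojo's or VozMob's actual code.
    posts, accounts = [], {}

    def handle_inbound_sms(from_number, body):
        """Create the account on first contact, then publish the message as a post."""
        user = accounts.setdefault(from_number, {"phone": from_number, "posts": 0})
        posts.append({"author": from_number, "text": body.strip()})
        user["posts"] += 1
        return len(posts) - 1  # index of the new post

    post_id = handle_inbound_sms("+15551234567", "Day laborers rally downtown today")
    print(posts[post_id]["text"])  # -> Day laborers rally downtown today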

420. VozMob Sasha Costanza-Chock

NEW LISTING
The VozMob Drupal Distribution is Drupal customized as a mobile blogging
platform. VozMob has been designed to make it easy to post content to the web
from mobile phones via voice calls, SMS, or MMS. You don't need a smart phone or
an app to post blog entries - any phone will do. VozMob allows civic journalists in
low-income communities to participate in the digital public sphere. Features include
groups, tags, geocoding and maps, MMS filters, and new user registration via SMS.
Site editors can send multimedia content out to registered users' mobile phones.
VozMob Drupal Distribution is developed through an ongoing codesign process by
day laborers, household workers, and students from the Institute of Popular
Education of Southern California (IDEPSCA.org). The project received early support
from the Annenberg School for Communication and Journalism at the University of
Southern California, MacArthur/HASTAC, Nokia, and others.

421. What's Up Leo Burd

NEW LISTING
What's Up is a set of tools designed to allow people in a small geographic
community to share information, plan events and make decisions, using media
that's as broadly inclusive as possible. The platform incorporates low cost LED
signs, online and paper event calendars and a simple, yet powerful, phone system
that is usable with the lowest-end mobile and touch tone phones.

422. Whose Voices? Twitter Citation in the Media Ethan Zuckerman and Nathan Matias

NEW LISTING
Mainstream media increasingly quote social media sources for breaking news.
"Whose Voices" tracks who's getting quoted across topics, showing just how citizen
media sources are influencing international news reporting.

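A minimal sketch of the kind of counting involved, assuming a simple phrase match over article text; the patterns and data format are our own illustration, not the project's actual method.

    # Count how often articles attribute a quote to Twitter, per topic.
    import re
    from collections import Counter

    TWITTER_CITE = re.compile(r"\b(said on twitter|tweeted|wrote on twitter)\b", re.I)

    def twitter_citation_counts(articles):
        """articles: iterable of (topic, text) pairs."""
        counts = Counter()
        for topic, text in articles:
            counts[topic] += len(TWITTER_CITE.findall(text))
        return counts

    sample = [("protest", "A witness tweeted that police had arrived."),
              ("sports", "The coach said the team played well.")]
    print(twitter_citation_counts(sample))  # -> Counter({'protest': 1, 'sports': 0})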