Hamza El Alaoui

/'hæmzə ɛl 'ælaʊi/


I’m a Ph.D. student in the Human-Computer Interaction Institute at Carnegie Mellon’s School of Computer Science, where I’m fortunate to be advised by Jeffrey P. Bigham.

My work bridges machine learning, robotics, and human–computer interaction to enable multimodal human–agent collaboration for human augmentation.

I focus on two complementary directions:

  1. From Human to Agent. I build context-aware systems that ingest and structure multimodal signals—such as speech, vision, and temporal data—to continually refine digital and embodied agents, align them with user intent, and coordinate purposeful actions.
  2. From Agent to Human. I develop adaptive collaborators that translate perceptual and reasoning outputs into intuitive support, enhancing human cognitive, physical, and sensory capabilities.

Previously, I was a Machine Learning Engineer at Oracle and spent time at Mastercard and Prodware. I received my B.S. in Computer Science, working with Violetta L. Cavalli-Sforza.

In my spare time, I enjoy exploring nature, biking, cooking, and interior design. In a past life, I competed in esports professionally and won several tournaments across MENA.