Building at the intersection of spatial intelligence, neurosymbolic reasoning, and scientific computing.
I am a 21-year-old researcher working across deep learning, scientific machine learning, and 3D spatial reasoning. My work is driven by a conviction that the next generation of intelligent systems must unify geometric perception, symbolic abstraction, and physically grounded world models — bridging the gap between pattern recognition and genuine understanding of spatiotemporal structure.
I am an active open-source contributor in the Julia and Python scientific computing ecosystems, with sustained contributions to the JuliaHealth and SciML organisations.
My core research focus lies in real-time 3D video understanding and spatial intelligence. I work extensively with monocular depth estimation, Gaussian splatting, and neural radiance fields — methods that reconstruct and reason about 3D scenes from 2D observations. My interest extends beyond static reconstruction to the temporal dimension: building spatiotemporal scene representations that capture how environments evolve over time, enabling embodied agents to perceive, predict, and act within dynamic physical worlds.
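For intuition about how these methods recover 3D structure from 2D images, here is a toy sketch (not code from any of my projects) of the volume-rendering quadrature at the core of NeRF-style approaches: densities and colours sampled along a camera ray are composited into a single pixel, and reconstruction amounts to fitting those densities so rendered pixels match observed ones. All names and values below are illustrative.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite samples along one ray (NeRF-style volume rendering)."""
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Expected pixel colour under the emission-absorption model
    return (weights[:, None] * colors).sum(axis=0)

# A ray that passes through empty space, then a dense red blob
sigma = np.array([0.0, 0.5, 3.0, 0.2])                      # volume density
rgb = np.array([[0., 0., 0.], [1., 0., 0.], [1., 0., 0.], [0., 0., 1.]])
print(render_ray(sigma, rgb, deltas=np.full(4, 0.25)))      # mostly red
```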
This line of work connects directly to the emerging field of world models — learned simulators of environment dynamics that allow agents to plan and imagine future states before committing to action. I am particularly interested in architectures that learn structured latent representations of physical dynamics, combining the expressivity of deep generative models with the compositionality of symbolic physics priors.
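A schematic of that loop, with random linear maps standing in for trained networks (everything here is illustrative): encode an observation into a latent state, roll candidate action sequences forward with the learned dynamics, and score plans without ever touching the real environment.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS, LATENT, ACT = 16, 4, 2

# Stand-ins for learned networks: encoder, latent dynamics, reward head
W_enc = 0.1 * rng.normal(size=(LATENT, OBS))
W_dyn = 0.1 * rng.normal(size=(LATENT, LATENT + ACT))
w_rew = rng.normal(size=LATENT)

def encode(obs):
    return np.tanh(W_enc @ obs)

def step(z, action):
    return np.tanh(W_dyn @ np.concatenate([z, action]))

def imagine(obs, action_seq):
    """Score a plan entirely in latent space: no environment interaction."""
    z, total = encode(obs), 0.0
    for a in action_seq:
        z = step(z, a)
        total += w_rew @ z
    return total

# Planning by imagination: pick the best of 32 random 5-step action plans
obs = rng.normal(size=OBS)
plans = [rng.normal(size=(5, ACT)) for _ in range(32)]
best = max(plans, key=lambda p: imagine(obs, p))
```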
I am deeply invested in the design of novel symbolic-hierarchical reinforcement learning architectures. The central question motivating this work is: how can we build agents that discover and exploit compositional structure in their environments, rather than learning flat, monolithic policies?
My approach draws from neurosymbolic AI — combining neural perception and continuous control with discrete symbolic program induction. This enables agents to learn abstract, transferable skills organised into task hierarchies, where high-level symbolic planners delegate to learned low-level controllers. The goal is not merely sample efficiency, but genuine compositional generalisation: agents that can recombine previously learned concepts to solve novel tasks zero-shot.
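A minimal sketch of the control structure (an options-style decomposition with invented skill names and a toy environment; in practice the low-level policies and termination predicates would be learned):

```python
# Skills: a symbolic name bound to a low-level policy and a termination test
SKILLS = {
    "goto_key":  (lambda s: "step_toward_key",  lambda s: s["has_key"]),
    "open_door": (lambda s: "turn_key",         lambda s: s["door_open"]),
    "goto_goal": (lambda s: "step_toward_goal", lambda s: s["at_goal"]),
}

def execute_plan(plan, state, env_step, max_steps=100):
    """A symbolic plan delegates to low-level controllers, one skill at a time."""
    for skill in plan:
        policy, done = SKILLS[skill]
        for _ in range(max_steps):
            if done(state):
                break
            state = env_step(state, policy(state))
    return state

def env_step(state, action):
    # Toy dynamics: each primitive action immediately achieves its effect
    effects = {"step_toward_key": "has_key", "turn_key": "door_open",
               "step_toward_goal": "at_goal"}
    return {**state, effects[action]: True}

start = {"has_key": False, "door_open": False, "at_goal": False}
print(execute_plan(["goto_key", "open_door", "goto_goal"], start, env_step))
```

The point of the decomposition is that a new task is just a new plan over the same skill vocabulary, which is what makes zero-shot recombination possible.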
My open-source work within the JuliaHealth and SciML organisations spans medical image processing, volumetric segmentation, and differentiable scientific computing.
I am an active contributor to MedImages.jl, a library for BIDS-format medical image loading with ITK wrapper functions, supporting the image transformations and format-conversion pipelines critical for reproducible medical imaging research. I also authored SupervoxelSegmentation, a native Julia implementation of supervoxel segmentation algorithms for volumetric data — a foundational building block for 3D medical image analysis.
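For a flavour of what supervoxel segmentation does, here is a deliberately simplified sketch: voxels are clustered in a joint intensity-plus-position feature space. This is a global k-means-style toy, not the algorithm in SupervoxelSegmentation (real SLIC-family methods restrict each voxel's search to nearby seeds, which is what makes them fast).

```python
import numpy as np

def toy_supervoxels(vol, n_side=4, compactness=0.1, iters=5):
    """Cluster a 3D volume into supervoxels in (intensity, z, y, x) space.

    `compactness` trades intensity similarity against spatial proximity.
    """
    zz, yy, xx = np.meshgrid(*(np.arange(s) for s in vol.shape), indexing="ij")
    feats = np.stack([vol.ravel() / (np.ptp(vol) + 1e-8),
                      compactness * zz.ravel() / vol.shape[0],
                      compactness * yy.ravel() / vol.shape[1],
                      compactness * xx.ravel() / vol.shape[2]], axis=1)
    # Seed cluster centres evenly across the volume
    centres = feats[np.linspace(0, len(feats) - 1, n_side ** 3).astype(int)]
    for _ in range(iters):
        # Assign every voxel to its nearest centre, then recompute centres
        dists = ((feats[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(len(centres)):
            if (labels == k).any():
                centres[k] = feats[labels == k].mean(axis=0)
    return labels.reshape(vol.shape)

labels = toy_supervoxels(np.random.rand(16, 16, 16))
```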
Beyond medical imaging, my scientific ML work includes contributions to data interpolation methods, SciML benchmarks, and operator-based scientific computing frameworks.
In applied machine learning, I built Leaf-Effects, a hierarchical vision transformer for detecting leaf diseases, incorporating patch embeddings, hierarchical feature stages, and feature pyramid networks for multi-scale detection. I also developed a complete ML pipeline for predicting water quality indicators from Landsat satellite imagery and TerraClimate data for the EY Open Science Data Challenge.
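The front end of such a hierarchical ViT is easy to sketch (illustrative only, not the Leaf-Effects code: Swin-style patch embedding followed by one patch-merging stage, with attention blocks and the feature pyramid omitted):

```python
import numpy as np

def patchify(x, p):
    """Split an (H, W, C) array into flattened, non-overlapping p x p patches."""
    H, W, C = x.shape
    patches = x.reshape(H // p, p, W // p, p, C).swapaxes(1, 2)
    return patches.reshape(-1, p * p * C)

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64, 3))

# Stage 1: 4x4 patch embedding to dim 32 -> a 16x16 grid of tokens
tokens = patchify(img, 4) @ (0.02 * rng.normal(size=(4 * 4 * 3, 32)))
grid = tokens.reshape(16, 16, 32)

# Stage 2: merge 2x2 neighbouring tokens to dim 64 -> an 8x8 grid
merged = patchify(grid, 2) @ (0.02 * rng.normal(size=(2 * 2 * 32, 64)))
print(merged.shape)  # (64, 64): 8*8 coarser tokens, each 64-dimensional
```

Each stage halves spatial resolution while widening channels; the per-stage feature maps are what a feature pyramid then fuses for multi-scale detection.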
My work on cryoet-tomogram-search-agent demonstrates my interest in AI-augmented scientific workflows — building intelligent agents that navigate CZI's CryoET datasets and retrieve relevant tomograms from natural language queries, bridging the gap between researchers and complex scientific data repositories.
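The retrieval step can be illustrated with a toy lexical scorer (the metadata fields and records here are hypothetical, and the real agent queries CZI's CryoET Data Portal rather than an in-memory list):

```python
def score(query, record):
    """Crude relevance: fraction of query terms found in the metadata text."""
    terms = set(query.lower().split())
    text = " ".join(str(v) for v in record.values()).lower()
    return sum(t in text for t in terms) / max(len(terms), 1)

def search(query, records, k=3):
    return sorted(records, key=lambda r: score(query, r), reverse=True)[:k]

tomograms = [  # hypothetical metadata records
    {"id": "TS_001", "species": "S. cerevisiae", "target": "ribosome"},
    {"id": "TS_002", "species": "E. coli", "target": "flagellar motor"},
]
print(search("cerevisiae ribosome", tomograms, k=1))  # -> TS_001
```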
I maintain several open-source tools that reflect my broader engineering interests. aibase-scraper is a Rust-powered web scraper with a React frontend for AI research news aggregation. PosterGuild is a research-paper-to-poster generation application. Julez-Dash demonstrates data-intensive web applications built on the Dash framework in Julia.
Primary — Julia, C++, Python
Systems & HPC — CUDA, distributed computing, GPU kernel development
Reference Languages — Fortran, Haskell, Zig, Rust, Mojo, Chapel, Lean, Scala, Lua, Elm
- MedImages.jl (Julia; fork, active contributor): Library for loading data based on the BIDS format with Insight Toolkit (ITK) wrapper functions. Provides functionality for performing various transformations of the loaded images and exporting them in a desired format.
- SupervoxelSegmentation (Julia): A supervoxel segmentation algorithm developed and written in native Julia.
- MedResearch (1 star): A research document exploring the medical imaging field and the potential data formats available.
- Leaf-Effects (Python): Detecting leaf disease using a hierarchical vision transformer. Patch embeddings -> hierarchical stages -> feature pyramid -> efficiency optimisations.
- EY-Water-Quality-Prediction (Jupyter Notebook): ML pipeline for predicting water quality indicators (TA, EC, DRP) using Landsat satellite imagery and TerraClimate data, built for the EY Open Science Data Challenge.
- cryoet-tomogram-search-agent (Python): AI agents that navigate CZI's CryoET datasets and search for the relevant tomograms based on a user query.
- PosterGuild (Python): A legendary research-paper-to-poster generation application.
- aibase-scraper (TypeScript, 2 stars): Rust web scraper for AIBase news articles with a React frontend.
- Julez-Dash (Julia, 2 stars): Apps built on top of Dash in Julia. Dash is a web framework for building data-intensive applications, offered by the renowned "plotly" organisation under the open-source MIT License.
- Supermemory-Bookmarks-chat (TypeScript): Greps bookmarks and other saved items from my socials (X and LinkedIn) for faster context search.
- video-collage-cli (TypeScript): Terminal utility for downloading videos and creating video collages.
This README is partially auto-generated. The project listing above is built from data/projects.csv by a Python script that fetches live metadata from the GitHub API. A GitHub Actions workflow regenerates it on every push.
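The script itself is not reproduced here, but a minimal sketch of that approach could look like the following, assuming data/projects.csv lists one owner/name per row (the description, language, and stargazers_count fields come from the public GitHub REST API; the real workflow would also handle authentication and rate limits):

```python
import csv
import requests

def fetch(repo):
    """Fetch live metadata for one repository from the GitHub REST API."""
    resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return data["description"], data["language"], data["stargazers_count"]

with open("data/projects.csv") as f:
    repos = [row[0] for row in csv.reader(f) if row]

lines = []
for repo in repos:
    desc, lang, stars = fetch(repo)
    star_note = f", {stars} stars" if stars else ""
    lines.append(f"- {repo.split('/')[1]} ({lang}{star_note}): {desc}")
print("\n".join(lines))
```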



