Inspiration
At nine years old, I taught myself to code. My school’s librarian introduced me to Scratch, an open-source platform that sparked my journey into computer science. While platforms like Neetcode, Scratch, and Khan Academy teach the fundamentals of computer science, they don’t cover the details needed to understand how machine learning algorithms actually work. We set out to remedy this.
What it does
Burn allows users to build, train, test, and deploy custom machine learning models regardless of their level of experience. With tooling intuitive to both beginners and experienced engineers, anyone can use Burn to create custom (convolutional) neural networks, transformer-based models, or regression and classification models. Burn simplifies the model development process with a 3-dimensional sandbox experience, letting users physically engage with the systems they design. With support for both local (Mac, PC, DGX Spark, Jetson Nano, etc.) and cloud (Modal) services, we enable any user to work in machine learning. Additionally, a Cosmos-based Vision Language Model Tutor lets users immerse themselves in the learning process by explaining exactly what’s going on at each step of development.
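To give a flavor of what "building a custom convolutional neural network" means under the hood, here is a minimal PyTorch sketch of the kind of model a Burn user might assemble; the class name, layer sizes, and structure here are illustrative assumptions, not Burn's actual generated code.

```python
import torch
import torch.nn as nn

# Hypothetical example: a tiny CNN like one a Burn user might design.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # One conv block: convolution, nonlinearity, downsampling.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 28x28 input halved by pooling -> 8 channels of 14x14.
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
logits = model(torch.randn(4, 1, 28, 28))  # batch of four 28x28 grayscale images
```

The same train/test/deploy loop applies whether the model runs locally on Apple Silicon or on a cloud service like Modal.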
How we built it
FastAPI connects our Python (PyTorch, scikit-learn, etc.) backend with our React (TypeScript) frontend. We served endpoints for the two to communicate, creating a seamless machine learning research and engineering experience with a production-grade UI (inspired by Unity!).
Challenges we ran into
One major issue we encountered was leveraging Edge AI. After deploying our workloads to the Jetson Orin Nano, we found that even with its integrated GPU, it did not perform as well as running the workloads on Apple Silicon. This resulted in significant time loss, and although we completed the workflow for the Jetson, it was never deployed to production. Instead, we focused our efforts on the DGX Spark, which is compatible with all the models on Burn.
Accomplishments that we're proud of
Initially, we planned to develop a completely different project: our interest was in distributed inference. However, early in the hackathon we realized an underlying flaw in network-distributed learning as a whole and quickly pivoted to Burn. Our quick thinking left us enough time to still create an end-to-end, Tree Hacks-ready submission.
What we learned
Since our group consists of members with vastly different levels of experience, each of us learned something different from this project. One member, who is relatively new to working in a team-based software environment, learned the importance of communication and proper version control. Two members, who have more experience in traditional software engineering, primarily learned about machine learning, how it is developed, and the technology behind it. The final member, a machine learning engineer, gained a deeper understanding of the connection between front-end and back-end systems, as well as Three.js rendering.
What's next for Burn
We hope to continue Burn as a long-term open-source project to enable future generations to learn about artificial intelligence and machine learning, similar to platforms such as Khan Academy and Scratch.
Authors
And a big shoutout to the organizers of Tree Hacks for having us for this great experience!
Built With
- aiml
- datascience
- fastapi
- html/css
- python
- pytorch
- react
- typescript
