Inspiration
Reasonance was inspired by Saga Anderson’s Mind Place investigation board in Alan Wake 2. Watching clues, evidence, and theories get arranged across a physical board over time revealed something important: real reasoning is not a single question and answer — it is iterative, visual, and cumulative.
Most AI tools today still behave like chat interfaces. But humans don’t reason in chats — we reason by forming hypotheses, testing them against evidence, revisiting earlier ideas, and refining our understanding across time. Reasonance was born from the idea of turning that detective board into a living, multi-turn reasoning system powered by Gemini.
What it does
Reasonance is a multi-turn abductive reasoning engine where Gemini conducts investigations across structured turns instead of single prompts.
Users upload source material (articles, case files, paradoxes, research papers), and the system:
- Generates initial hypotheses and evidence (Turn 1)
- Allows users to select a hypothesis and link evidence
- Uses Gemini to generate “What If” explorations, new conclusions, and cross-references
- Builds a persistent reasoning graph across turns
- Synthesizes the entire investigation into a structured case file
Instead of chat history, Reasonance creates a detective board of evolving thought.
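The persistent reasoning graph described above can be sketched as a small set of node and edge types. This is a minimal, illustrative model — the names and fields here are assumptions, not Reasonance's actual schema:

```typescript
// Hypothetical data model for the reasoning board; Reasonance's real
// schema (stored in PostgreSQL) may use different names and fields.
type NodeKind = "hypothesis" | "evidence" | "conclusion";
type EdgeKind = "supports" | "contradicts" | "references";

interface ReasoningNode {
  id: string;
  kind: NodeKind;
  turn: number; // which investigation turn produced this node
  text: string;
}

interface ReasoningEdge {
  from: string; // source node id
  to: string;   // target node id
  kind: EdgeKind;
}

interface Investigation {
  nodes: ReasoningNode[];
  edges: ReasoningEdge[];
}

// Example: one hypothesis supported by a piece of evidence from turn 1
const board: Investigation = {
  nodes: [
    { id: "h1", kind: "hypothesis", turn: 1, text: "The witness account is unreliable." },
    { id: "e1", kind: "evidence", turn: 1, text: "Timestamps contradict the statement." },
  ],
  edges: [{ from: "e1", to: "h1", kind: "supports" }],
};
```

Modeling everything as typed nodes and edges is what lets later turns reference earlier ones by id instead of by fuzzy chat history.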
How we built it
Reasonance is a full-stack application:
- Spring Boot + PostgreSQL store investigations as graph structures (nodes and edges)
- Angular + TailwindCSS render a turn-based investigation board inspired by Alan Wake 2
- Gemini 3 acts as the reasoning engine
- Ollama (gemma3n) was used for local experimentation and prompt iteration
The core architecture is a turn loop:
- Gemini generates hypotheses and evidence from the source
- User selects a hypothesis and links evidence
- Backend reconstructs the full reasoning graph and sends it to Gemini
- Gemini produces new hypotheses that explicitly reference earlier turns
- This repeats across multiple turns (limited to 5 for MVP) before a final synthesis
This transforms Gemini from a responder into an investigator.
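The turn loop above can be sketched in code. Everything here is illustrative: `askModel` stands in for the real Gemini API call, and the filtering rule is one simple way to enforce the "explicitly reference earlier turns" requirement:

```typescript
// Sketch of the turn loop, assuming each model response returns new
// hypotheses that cite earlier node ids. Names are hypothetical.
interface Hypothesis {
  id: string;
  turn: number;
  text: string;
  cites: string[]; // ids of earlier nodes this hypothesis references
}

const MAX_TURNS = 5; // MVP limit

function runInvestigation(
  seed: Hypothesis[],
  askModel: (graph: Hypothesis[], turn: number) => Hypothesis[]
): Hypothesis[] {
  let graph = [...seed];
  for (let turn = 2; turn <= MAX_TURNS; turn++) {
    // Backend reconstructs the full graph and sends it to the model
    const fresh = askModel(graph, turn);
    // Keep only hypotheses that explicitly reference an earlier node
    const known = new Set(graph.map((h) => h.id));
    graph = graph.concat(fresh.filter((h) => h.cites.some((id) => known.has(id))));
  }
  return graph;
}

// Stub model: each turn produces one hypothesis citing the previous one
const result = runInvestigation(
  [{ id: "h1", turn: 1, text: "Initial hypothesis", cites: [] }],
  (graph, turn) => [
    {
      id: `h${turn}`,
      turn,
      text: `Refinement at turn ${turn}`,
      cites: [graph[graph.length - 1].id],
    },
  ]
);
// result holds one hypothesis per turn, turns 1 through 5
```

The key design choice is that the full graph, not a chat transcript, is what gets serialized and sent back to the model each turn.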
Challenges we ran into
- Getting Gemini to maintain continuity across turns without drifting
- Designing strict JSON schemas so reasoning could be parsed and stored as a graph
- Preventing the system from becoming just a “prompt wrapper”
- Building a UI that feels like a detective board, not a dashboard
- Forcing explicit cross-referencing between turns
- Balancing user-guided exploration with AI-generated reasoning
- Initially tried to recreate the game’s investigative board layout, but it was not a good fit for the reasoning engine we were building
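One way to address the strict-JSON challenge is to validate every model response against an expected shape before it is stored as graph nodes. A minimal sketch, with illustrative field names that are not Reasonance's actual schema:

```typescript
// Minimal structural validation for a model response before persisting
// it as graph nodes. Field names here are hypothetical.
interface HypothesisPayload {
  text: string;
  evidenceIds: string[];
}

function parseHypotheses(raw: string): HypothesisPayload[] {
  const data: unknown = JSON.parse(raw); // throws on malformed JSON
  if (!Array.isArray(data)) throw new Error("expected a JSON array");
  return data.map((item, i) => {
    const rec = item as Record<string, unknown>;
    if (
      typeof item !== "object" || item === null ||
      typeof rec.text !== "string" ||
      !Array.isArray(rec.evidenceIds)
    ) {
      throw new Error(`item ${i} does not match the schema`);
    }
    return { text: rec.text as string, evidenceIds: rec.evidenceIds as string[] };
  });
}

const ok = parseHypotheses(
  '[{"text":"The logs were altered","evidenceIds":["e1","e2"]}]'
);
// ok[0].evidenceIds → ["e1", "e2"]
```

Rejecting malformed responses outright, rather than repairing them, keeps the stored graph trustworthy at the cost of an occasional retried turn.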
Accomplishments that we're proud of
- Turning Gemini into a multi-turn reasoning partner, not a chatbot
- Building a visual reasoning board that changes how users think
- Creating a persistent hypothesis graph across investigations
- Achieving structured abductive reasoning with explicit lineage
- Orchestrating multiple coding agents in parallel to build the application, rather than relying on a single tool like Claude Code or Codex
What we learned
- LLMs are powerful at hypothesis generation but require structure for continuity
- True reasoning needs state, memory, and architecture, not just prompts
- Visualizing reasoning changes how users interact with AI
- Gemini’s long context window enables reasoning lineage across turns
- Prompt engineering alone is not enough — reasoning systems need design
What's next for Reasonance
- Support for PDFs, audio, and video as investigation inputs
- Automatic visualization of the reasoning graph
- Collaborative investigations across multiple users
- Exportable investigation reports
- Deeper integration with Gemini’s multimodal and long-context capabilities