Personal Project · 2025 - 2026
Navorina — Designing an AI system for financial reasoning, not reactions
Navorina is a personal project exploring how AI can support financial reasoning under uncertainty, rather than automate decisions or generate reactive advice.
When I started working on Navorina, I wasn’t trying to design another finance app. I was trying to understand why people struggle to make confident financial decisions, even when they have access to data, calculators, and tools.
The Problem
Most financial tools optimize for speed, automation, and surface-level insights. In practice, this fragments thinking.
Users jump between spreadsheets, dashboards, calculators, and notes — trying to piece together an understanding of their financial situation. Decisions become reactive, short-term, and difficult to explain, even to oneself.
The core problem wasn’t lack of information. It was the absence of clarity, continuity, and reasoning.
The Core Idea
I approached Navorina as a response to that gap. Instead of tracking transactions or predicting outcomes, Navorina focuses on financial snapshots — captured as states in time.
Each snapshot represents income structure, expense structure, assumptions, and context. Snapshots can be compared, analyzed, and revisited over time.
The goal is not to tell users what to do, but to help them understand why their financial situation looks the way it does.
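As an illustration only (the names and fields here are hypothetical, not Navorina's actual schema), a snapshot could be modeled as an immutable, dated record of income, expenses, and explicitly stated assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Snapshot:
    """An immutable financial state captured at a point in time."""
    captured_on: date
    income: dict[str, float]    # e.g. {"salary": 4200.0}
    expenses: dict[str, float]  # e.g. {"rent": 1300.0}
    assumptions: list[str]      # stated explicitly, never inferred
    context: str = ""           # free-form notes on circumstances

    def net(self) -> float:
        """Monthly surplus implied by this state."""
        return sum(self.income.values()) - sum(self.expenses.values())
```

Freezing the dataclass reflects the idea that a snapshot is a state in time: later changes produce a new snapshot rather than mutating an old one.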
Design Philosophy
From the beginning, my role went beyond interface design. I acted as a product architect, defining how the system should reason, what it is allowed to assume, and what must remain explicit to the user. Every decision was tested against a single question:
Does this help the user reason better — or does it just make the system look smarter?
This question shaped every layer of the product and kept the focus on reasoning over reactions.
Key Decisions
1. No instant answers. Navorina deliberately avoids quick conclusions and flashy signals. Every verdict is grounded in visible logic and explicit assumptions. This slows down interaction, but it builds trust; in a financial context, trust matters more than speed.
2. Snapshots over streams. Instead of real-time dashboards or transaction feeds, the system centers on snapshots tied to time and context. This lets users see what changed, compare states across months or scenarios, and trace decisions back to their assumptions.
3. AI as reasoning support, not a chatbot. AI operates behind the scenes, analyzing relationships, explaining outcomes, and highlighting what matters. The assistant does not replace judgment; it preserves context and supports thinking. Clarity over automation is the default.
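The comparison in the second decision can be sketched in a few lines. This is a minimal, hypothetical illustration (Navorina's actual comparison logic is not shown here), assuming each snapshot side is a plain category-to-amount mapping:

```python
def diff_snapshots(earlier: dict[str, float],
                   later: dict[str, float]) -> dict[str, float]:
    """Return the per-category change between two snapshot states.

    Categories missing from one side are treated as zero, so new or
    dropped categories still show up in the diff.
    """
    categories = set(earlier) | set(later)
    return {c: later.get(c, 0.0) - earlier.get(c, 0.0) for c in categories}
```

Because the output is itself a category map, a user (or the reasoning layer) can see exactly which assumption or line item drove a change between two months.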
Interaction Model
The interaction model is intentionally calm. There are no alerts, no real-time distractions, and no pressure to act immediately. Users move between snapshots, explore differences, and come to understand decisions in hindsight and over time.
System Architecture (Design Perspective)
I treated Navorina as system design, not a collection of features. Reasoning is separated from presentation, context is preserved across interactions, and memory is explicit rather than hidden.
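One way to read that separation, purely as an illustrative sketch (these function and field names are invented, not the actual codebase): the reasoning layer produces a structured verdict with its assumptions attached, and the presentation layer only renders it, adding no logic of its own.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    """Output of the reasoning layer: a conclusion plus the assumptions behind it."""
    conclusion: str
    assumptions: list[str]

def reason_about_surplus(income: float, expenses: float) -> Verdict:
    """Reasoning layer: pure logic, no formatting, assumptions made explicit."""
    surplus = income - expenses
    trend = "positive" if surplus >= 0 else "negative"
    return Verdict(
        conclusion=f"Monthly surplus is {trend} ({surplus:.2f}).",
        assumptions=["figures are monthly", "no one-off items included"],
    )

def render(verdict: Verdict) -> str:
    """Presentation layer: displays the verdict and its assumptions verbatim."""
    lines = [verdict.conclusion, "Assumptions:"]
    lines += [f"  - {a}" for a in verdict.assumptions]
    return "\n".join(lines)
```

Keeping assumptions as data rather than prose is what makes a verdict auditable: the interface cannot show a conclusion without also being handed the assumptions it rests on.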
Trade-offs and Outcomes
This approach meant slower interaction and fewer impressive first-glance features, in exchange for greater cognitive transparency. In return, the system reduced cognitive load, minimized context switching, and supported long-term thinking. That outcome reinforced the idea that clarity scales better than complexity.
What This Project Represents
For me, Navorina represents more than a single case study. It reflects how I approach complex, AI-driven systems, ambiguous problem spaces, and long-term decision support.
I don’t optimize for visual novelty or automation in isolation. I design systems that help people think more clearly over time.
Current State & Ongoing Development
Navorina is intentionally not finished. It exists as a working product, an ongoing research space, and a testbed for AI-assisted reasoning.
Each iteration reinforces the same principle: the system must remain understandable, auditable, and aligned with real human thinking.