NimaAI started as a question I couldn't stop asking: what would it look like to build an AI tool that was genuinely useful to a single person, rather than marginally useful to everyone? The thesis was simple — the current generation of AI products optimises for breadth. We wanted to optimise for depth.
That sounds obvious when you say it. But it leads you to a very different set of product decisions, and a very different relationship with the people you're building for.
"Most AI tools are built to impress you in a demo. We wanted to build something that gets better the more you use it."
What NimaAI actually does
I'm not going to give you the full product breakdown here — we're still iterating on the core experience and I don't want to over-index on a description that might be outdated in three weeks. But the core idea is this: NimaAI is a personal AI layer that learns your context over time. Not just what you've told it, but how you think, what you care about, and what kinds of output are actually useful to you.
The difference between a generic AI assistant and what we're building is the difference between a search engine and a colleague who has worked alongside you for two years. The first gives you results. The second gives you answers.
Why this is technically hard
The naive version of "personalised AI" is just retrieval — store a bunch of user data and inject it into the context window. We tried that. It doesn't work well enough. The model doesn't know what to do with the data, and the user ends up with something that feels like it's reading from notes rather than actually understanding them.
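The retrieve-and-inject pattern described above can be sketched in a few lines. Everything here is illustrative, not NimaAI's actual code: the function names are hypothetical, and a toy word-overlap score stands in for real embedding similarity.

```python
# Hypothetical sketch of the naive "store user data, inject it into
# the context window" approach. Word overlap is a stand-in for
# embedding-based retrieval; names are illustrative only.

def retrieve_context(query: str, notes: list[str], k: int = 2) -> list[str]:
    """Rank stored user notes by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(notes, key=lambda n: -len(q & set(n.lower().split())))
    return scored[:k]

def build_prompt(query: str, notes: list[str]) -> str:
    """Paste the top-k retrieved notes verbatim into the prompt."""
    context = "\n".join(f"- {n}" for n in retrieve_context(query, notes))
    return f"User context:\n{context}\n\nQuestion: {query}"

notes = [
    "Prefers concise answers with code examples",
    "Working on a Rust side project",
    "Dislikes bullet-point summaries",
]
print(build_prompt("What format should my Rust answer take?", notes))
```

The failure mode is visible even in this toy: the model receives a flat list of quotes with no structure telling it which notes matter, how reliable they are, or how they relate, which is why the output feels like reading from notes rather than understanding them.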
The real challenge is building a representation of a person that the model can reason about — not just retrieve from. That requires a different approach to how you store information, how you update it, and how you prompt the model to use it.
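One way to make that concrete, purely as a sketch and not a description of NimaAI's design, is to store traits as structured records that get reinforced and re-weighted over time, then rendered with confidence attached so the model has something to reason about rather than quote. Every field and threshold below is an assumption for illustration.

```python
# Hedged sketch: a structured, updatable user representation instead of
# a pile of retrieved text. All names, fields, and confidence-update
# rules are illustrative assumptions, not NimaAI's actual approach.

from dataclasses import dataclass, field

@dataclass
class Trait:
    statement: str      # e.g. "prefers terse explanations"
    confidence: float   # strengthened as evidence accumulates
    evidence: int = 1   # number of supporting observations

@dataclass
class UserModel:
    traits: dict[str, Trait] = field(default_factory=dict)

    def observe(self, key: str, statement: str) -> None:
        """Reinforce an existing trait or add a new low-confidence one."""
        if key in self.traits:
            t = self.traits[key]
            t.evidence += 1
            t.confidence = min(0.95, t.confidence + 0.15)  # arbitrary cap/step
        else:
            self.traits[key] = Trait(statement, confidence=0.3)

    def render(self) -> str:
        """Summary the model can weigh and reason over, not just quote."""
        lines = [
            f"- {t.statement} (confidence {t.confidence:.2f}, "
            f"{t.evidence} observations)"
            for t in sorted(self.traits.values(), key=lambda t: -t.confidence)
        ]
        return "About this user:\n" + "\n".join(lines)

m = UserModel()
m.observe("style", "prefers terse, example-driven answers")
m.observe("style", "prefers terse, example-driven answers")
print(m.render())
```

The point of the shape, rather than the specific numbers, is that updates modify an existing record instead of appending another snippet, and the rendered summary carries uncertainty the model can act on.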
We've made meaningful progress on this. It's still early, and there are unsolved problems, but I genuinely believe we're working on one of the more interesting problems in applied AI right now.
What I've learned so far
Building NimaAI has taught me that the hardest part of AI product development isn't the model — it's the product surface. How do you give someone control over their AI without overwhelming them? How do you surface personalisation without making it feel creepy? How do you build trust in a system that's making inferences about you?
These are UX problems as much as they are technical problems. And they're genuinely hard. I think the teams that solve them well are going to build something lasting.
More updates to come. We're building in the open as much as we can, and I'll keep sharing what I'm learning here.