AI isn’t just another roadmap item. It fundamentally changes how products get built, and the faster teams accept that, the better their chances of building something that actually works.
But to be clear upfront: this isn’t the first time I’ve seen this happen.
After ten years of building startups, I’ve seen one pattern show up again and again:
any time you’re working in a space with a high level of unknowns, the traditional product playbook starts to break down.
AI just makes that impossible to ignore.
At hackajob, we’re right in the middle of that shift. We’re building AI agents that help companies and job seekers navigate an ever-changing hiring landscape. And it’s forced us to unlearn a lot of “good” product habits.
Not because they were wrong, but because they were designed for work with far more certainty.
Over the last decade, I’ve seen the same dynamics appear whenever teams take on work with a high level of unknowns. In all of those cases, the same things happen: feasibility stays unclear until you’re deep inside the problem, estimates quietly turn into bets, and early wins hide the real complexity.
AI simply compresses these feedback loops. It exposes uncertainty faster and more publicly than most technologies before it.
So while this post uses AI examples, the principles apply far more broadly.
In traditional product development, estimation is difficult but usually achievable. You roughly understand the system you’re building and where the risks are likely to sit.
High-uncertainty work breaks that assumption.
A good example for us was building an agent that could join briefing calls between recruiters and hiring managers. The goal was to capture rich context and automatically create a job with accurate requirements on hackajob.
Within the first day, we had a prototype: an agent that could join a call, follow the conversation, and draft a job from what it heard.
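For a sense of how thin that first version was, here’s a minimal sketch of its shape. `call_llm` and the schema are hypothetical stand-ins, not our actual code - the point is that the whole prototype fits in one prompt:

```python
import json

# Illustrative job schema - not hackajob's actual data model.
JOB_SCHEMA = {
    "title": "string",
    "location": "string",
    "must_have_skills": ["string"],
    "nice_to_have_skills": ["string"],
    "salary_range": "string",
}

def draft_job_from_transcript(transcript: str, call_llm) -> dict:
    """Ask an LLM to turn a briefing-call transcript into a structured job.

    `call_llm` is a stand-in for whichever completion API you use: it takes
    a prompt string and returns the model's text response.
    """
    prompt = (
        "You are listening to a briefing call between a recruiter and a "
        "hiring manager. Extract the job being discussed as JSON matching "
        f"this schema:\n{json.dumps(JOB_SCHEMA, indent=2)}\n\n"
        f"Transcript:\n{transcript}"
    )
    return json.loads(call_llm(prompt))
```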
Internally, it looked incredibly promising.
Based on that, we estimated it would take about a week to productionise. We even lined up people to beta test it.
That estimate was wrong.
Not because the prototype didn’t work - but because it hid the real complexity. When we tried to extend the value (asking follow-up questions, capturing missing information, surfacing anti-patterns), the system became far harder to manage.
What felt like incremental improvements turned into a month-long exploration - and eventually, a rethink.
This is what happens when you’re building in uncertainty: feasibility often isn’t knowable until you’re already deep inside the problem. Estimations stop being forecasts and start being bets.
Prototypes are essential. They help you see what might be possible.
But in high-uncertainty work, they also create false confidence.
In our case, the early prototype convinced us the hardest part was done. In reality, it was the easiest.
The complexity showed up later:

- knowing when to ask a follow-up question without derailing a conversation between two humans
- capturing the information the call never covered
- surfacing anti-patterns in real time, tactfully
- keeping a live interaction model coherent as the conversation moved
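To make that concrete, here’s a deliberately toy sketch of the decision surface an active agent faces on every utterance. Everything here (`LiveAgentState`, `on_utterance`, the pause and anti-pattern checks) is a hypothetical placeholder, not our implementation - what matters is how many judgement calls live inside a single turn:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LiveAgentState:
    captured: dict = field(default_factory=dict)        # requirements heard so far
    open_questions: list = field(default_factory=list)  # follow-ups not yet asked
    antipatterns: list = field(default_factory=list)    # things worth flagging later

def is_natural_pause(utterance: str) -> bool:
    # Placeholder heuristic - in practice, this judgement call is the hard part.
    return not utterance.strip().endswith("?")

def looks_like_antipattern(utterance: str) -> bool:
    # Placeholder heuristic - e.g. an unrealistic requirement slipping in.
    return "10 years of experience" in utterance.lower()

def on_utterance(state: LiveAgentState, utterance: str) -> Optional[str]:
    """Return something for the agent to say, or None to stay quiet.

    Every branch is a product decision the demo never had to make: when is
    a missing field worth interrupting two humans mid-conversation for?
    """
    if looks_like_antipattern(utterance):
        state.antipatterns.append(utterance)  # flag silently; don't derail the call
        return None
    if state.open_questions and is_natural_pause(utterance):
        return state.open_questions.pop(0)    # interject with a queued follow-up
    return None
```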
None of this appears in a demo.
Eventually, we realised the “active” agent wasn’t the right solution - not because the AI wasn’t capable, but because the interaction model was too complex for the value it delivered.
So we stepped back and simplified: instead of an active participant in the call, the agent now listens and turns the conversation into a job afterwards - no live interjections, no real-time judgement calls.
That version works extremely well. But it’s not the version the prototype sold us.
This is why prototypes lie. They show what’s possible, not what’s sustainable.
The biggest shift we had to make wasn’t technical - it was cultural.
Traditional product teams are trained to eliminate uncertainty early. With work like this, that goal becomes unrealistic.
So we changed how we frame progress.
We now clearly distinguish between:

- exploration work, where the honest output is learning and any estimate is a bet
- delivery work, where the problem is understood and a date is a commitment we can stand behind
This framing created clarity across the business: stakeholders stopped expecting firm dates for exploratory work, and teams stopped dressing bets up as forecasts.
We also changed how we measure progress.
Instead of asking, “When will this be done?”, we ask, “What did we learn this week, and what’s the next bet?”
Progress beats certainty.
If you’re building anything genuinely new - AI or otherwise - a few lessons are worth holding onto:

- Treat estimates as bets, not forecasts.
- Expect prototypes to show you what’s possible, not what’s sustainable.
- Be ready to simplify when the interaction model costs more than the value it delivers.
- Measure progress in learning, not in dates.
AI hasn’t created this problem. It’s just made it visible.
And once you recognise that, these principles stop being “AI-specific” and start becoming a better way to build anything where the answers aren’t known upfront.