AI isn’t just another roadmap item. It fundamentally changes how products get built, and the faster teams accept that, the better their chances of building something that actually works.
But to be clear upfront: this isn’t the first time I’ve seen this happen.
After building startups for the last 10 years, I’ve seen one pattern show up again and again:
any time you’re working in a space with a high level of unknowns, the traditional product playbook starts to break down.
AI just makes that impossible to ignore.
At hackajob, we’re right in the middle of that shift. We’re building AI agents that help companies and job seekers in the ever-changing hiring landscape. And it’s forced us to unlearn a lot of “good” product habits.
Not because they were wrong, but because they were designed for work with far more certainty.
This isn’t just about AI; it’s about uncertainty
Over the last decade, I’ve seen the same dynamics appear whenever teams try to:
- use technology they don’t fully understand yet
- design a completely new user experience
- build something without a clear precedent
- enter a space where behaviour isn’t predictable
In all of those cases, the same things happen:
- plans become fragile
- estimates become guesses
- early confidence turns out to be misleading
AI simply compresses these feedback loops. It exposes uncertainty faster and more publicly than most technologies before it.
So while this post uses AI examples, the principles apply far more broadly.
1. Estimates die
In traditional product development, estimation is difficult but usually achievable. You roughly understand the system you’re building and where the risks are likely to sit.
High-uncertainty work breaks that assumption.
A good example for us was building an agent that could join briefing calls between recruiters and hiring managers. The goal was to capture rich context and automatically create a job with accurate requirements on hackajob.
By the end of day one, we had a prototype:
- it joined a call
- listened to the conversation
- asked a question
- produced a rich, detailed job on hackajob
Internally, it looked incredibly promising.
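To make the gap concrete, here’s roughly what that day-one happy path looks like. This is a hypothetical sketch, not our actual code - the names and the trivial stand-in logic are invented purely for illustration:

```python
# Hypothetical sketch of a day-one briefing agent. The happy path is
# only a few steps; trivial stand-ins replace the real services.
from dataclasses import dataclass


@dataclass
class JobDraft:
    title: str
    requirements: list[str]


def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-to-text service.
    return "Senior engineer role. Must know Python. Must mentor juniors."


def extract_job(transcript: str) -> JobDraft:
    # Stand-in for an LLM prompt that pulls the role and its
    # requirements out of the briefing conversation.
    requirements = [s.strip() for s in transcript.split(".") if "must" in s.lower()]
    return JobDraft(title="Senior Engineer", requirements=requirements)


def run_briefing_agent(audio: bytes) -> JobDraft:
    return extract_job(transcribe(audio))
```

A pipeline this short is exactly what a demo rewards: one clean recording in, one rich job out.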
Based on that, we estimated it would take about a week to productionise. We even lined up people to beta test it.
That estimate was wrong.
Not because the prototype didn’t work - but because it hid the real complexity. When we tried to extend the value (asking follow-up questions, capturing missing information, surfacing anti-patterns), the system became far harder to manage.
What felt like incremental improvements turned into a month-long exploration - and eventually, a rethink.
This is what happens when you’re building in uncertainty: feasibility often isn’t knowable until you’re already deep inside the problem. Estimations stop being forecasts and start being bets.
2. Prototypes lie
Prototypes are essential. They help you see what might be possible.
But in high-uncertainty work, they also create false confidence.
In our case, the early prototype convinced us the hardest part was done. In reality, it was the easiest.
The complexity showed up later:
- multiple people speaking at once
- deciding when an agent should speak
- managing interruptions and timing
- avoiding awkward or disruptive behaviour
- handling edge cases in live conversations
None of this appears in a demo.
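To show where that complexity lives, here’s a toy version of just one of those problems: deciding whether the agent should speak right now. Everything here is hypothetical - it exists only to show how many judgement calls hide inside a single decision:

```python
# Toy illustration (hypothetical, not our system): should the agent
# speak right now? Every branch hides a judgement call that a scripted
# demo never has to make.
import time


def should_speak(last_human_speech_at: float,
                 open_questions: list[str],
                 someone_is_speaking: bool,
                 silence_threshold_s: float = 2.0) -> bool:
    if someone_is_speaking:
        return False  # never talk over a human
    if not open_questions:
        return False  # nothing worth interrupting for
    silence = time.monotonic() - last_human_speech_at
    # Too low a threshold and the agent cuts into a pause for thought;
    # too high and the moment for the question has already passed.
    return silence >= silence_threshold_s
```

Even this toy ignores the harder cases above: overlapping speakers, interruptions, and the social cost of getting the timing slightly wrong.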
Eventually, we realised the “active” agent wasn’t the right solution - not because the AI wasn’t capable, but because the interaction model was too complex for the value it delivered.
So we stepped back and simplified:
- a passive agent that listens
- summarises at the end of the call
- generates the job description after a trigger
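In pipeline terms, the passive design collapses most of that real-time complexity away. Here’s a hypothetical sketch, reusing the stand-ins from the earlier example:

```python
# Hypothetical sketch of the passive agent: no turn-taking decisions,
# just listen, then act once on an explicit trigger.
def summarise(transcript: str) -> str:
    # Stand-in for an end-of-call summarisation prompt.
    return transcript


def run_passive_agent(audio: bytes, triggered: bool) -> JobDraft | None:
    transcript = transcribe(audio)  # stand-in from the earlier sketch
    if not triggered:
        return None  # stay silent for the whole call
    return extract_job(summarise(transcript))
```

The should_speak problem disappears entirely, because the agent never has to decide when to talk.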
That version works extremely well. But it’s not the version the prototype sold us.
This is why prototypes lie. They show what’s possible, not what’s sustainable.
3. Certainty evaporates
The biggest shift we had to make wasn’t technical; it was cultural.
Traditional product teams are trained to eliminate uncertainty early. With work like this, that goal becomes unrealistic.
So we changed how we frame progress.
We now clearly distinguish between:
- Experiments: learning-focused, supported directly by the product team
- Early Release: real users, humans in the loop, active feedback
- General Release: stable, sellable, supportable at scale
This framing created clarity across the business:
- Sales know what not to sell
- Support know what to expect
- Clients understand what they’re getting
- Product teams feel safer exploring
We also changed how we measure progress.
Instead of asking, “When will this be done?”, we ask:
- What can you show tomorrow?
- What did we learn today?
- What decision does this unlock next?
Progress beats certainty.
What this means for product leaders
If you’re building anything genuinely new - AI or otherwise - a few lessons are worth holding onto:
- Uncertainty isn’t a failure of planning; it’s a property of the work
- Familiar human interaction patterns reduce friction
- “Good enough” is a strategic decision, not a compromise
- Feedback loops matter more than roadmaps
- Products in uncertain spaces must be designed to evolve
AI hasn’t created this problem. It’s just made it visible.
And once you recognise that, these principles stop being “AI-specific” and start becoming a better way to build anything where the answers aren’t known upfront.