Building AI at hackajob: why our product process had to change — and why we’re sharing it openly

The AI wave hasn’t just changed what products can do. It’s changed how products get built. And, frankly, the companies that pretend it hasn’t are the ones falling behind.

At hackajob, we’re right in the middle of that shift. We’re building AI agents that help teams get more from intake calls, assess candidate–role fit, automate sourcing workflows, and support recruiters with genuinely useful intelligence. And along the way, we’ve realised something very simple:

AI turns product development into a completely different game.

Competitors aren’t talking about the challenges. We’re choosing to be open about what’s worked, what hasn’t, and how our process has evolved, because with more transparency we can learn together what the future of product development looks like.

Here’s what we’ve learned.


With AI, you can build things you literally couldn’t build before

Traditional engineering is built around structured data, defined schemas, and clear boundaries.

AI doesn’t care about any of that.

Want to understand whether the responsibilities in a CV map to a job description? Traditionally, you’d need taxonomies, filters, data pipelines and months of engineering.

With AI, you can put the raw content into a model, add guardrails, test a few dozen examples, and suddenly you’ve got an early signal. Not perfect, but working.
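The shape of that workflow can be sketched in a few lines. This is a minimal illustration, not hackajob’s actual pipeline: the prompt wording, the score format, and the guardrail are all assumptions, and the model call is injected as a plain callable so the sketch stays self-contained.

```python
def fit_signal(cv_text, jd_text, call_model):
    """Ask a model for a rough CV-to-role fit score, with a guardrail.

    `call_model` is any callable taking a prompt string and returning the
    model's raw text reply. In production this would wrap an LLM API; here
    it is injected so the example runs anywhere.
    """
    prompt = (
        "Rate how well the responsibilities in this CV map to the job "
        "description, as a single number between 0 and 1.\n\n"
        f"CV:\n{cv_text}\n\nJob description:\n{jd_text}\n\nScore:"
    )
    raw = call_model(prompt)
    # Guardrail: the reply must parse as a score in [0, 1];
    # anything else is rejected rather than passed downstream.
    try:
        score = float(raw.strip())
    except ValueError:
        return None
    return score if 0.0 <= score <= 1.0 else None


# Testing a few examples with a stubbed model:
print(fit_signal("Led backend team...", "Engineering manager...", lambda p: "0.8"))  # 0.8
print(fit_signal("...", "...", lambda p: "maybe?"))  # None: guardrail rejects it
```

Swapping the stub for a real model call is all it takes to go from this sketch to an early signal on live data, which is exactly why validation now takes days rather than months.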

That shift opens doors:

  • brief-collapsing systems
  • adaptive screening agents
  • automated sourcing flows
  • contextual matching

All things that were previously “too big” to build without a year of effort.

This is where AI stops being a feature and becomes an enabler.


"Show, don’t tell" is more important than ever

AI doesn’t reward long debates. It rewards quick experimentation… and getting something into the world to prove the value it could add.

So we’ve doubled down on:

  • hackathons
  • one-sprint tech spikes
  • low-code prototypes
  • engineers building fast proofs of concept
  • even non-engineers jumping into prototyping

The point isn’t polish. It’s clarity.

Can it work?

Is it useful?

Does it behave consistently enough to invest in?

A scrappy prototype delivers more truth than a month of planning… especially when you’ll never be able to anticipate every risk or uncertainty in an AI system.


We’re explicit about what’s an experiment vs. what’s ready for general release

One of the biggest mistakes teams make with AI is expecting clarity too early … and expecting too many people to be comfortable with ambiguity.

So we avoid that entirely.

Our lifecycle:

Experiment → Early Release → General Release

  • Experiment: feasibility + value testing
  • Early Release: real usage, humans-in-the-loop, active feedback loops
  • General Release: proven stability, measured against defined success metrics

This clarity helps sales, customer support, clients, and internal teams understand momentum and expectations. No one is left guessing.
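The lifecycle above can be made explicit in code. The stage names come from this article; the state machine itself is an illustrative sketch, not a description of hackajob’s internal tooling.

```python
# The three stages, in promotion order.
LIFECYCLE = ["Experiment", "Early Release", "General Release"]

def promote(stage):
    """Move a feature to the next stage; General Release is terminal."""
    i = LIFECYCLE.index(stage)  # raises ValueError for unknown stages
    return LIFECYCLE[min(i + 1, len(LIFECYCLE) - 1)]

print(promote("Experiment"))     # Early Release
print(promote("Early Release"))  # General Release
```

The value of writing it down this explicitly is that every team sees the same three states and the same one-way promotion path, which is what removes the guesswork.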


Human-in-the-loop isn’t a checkbox… it’s our default

We’re not in the business of making fully autonomous hiring decisions.

We’re in the business of making humans faster, smarter, and more confident - both job seekers and prospective employers.

Our rules are simple:

  • AI can summarise, assess, and shortlist
  • Humans approve decisions
  • AI doesn’t alter candidate data
  • AI handles repetition; humans handle judgment
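Those rules can be encoded rather than merely documented. A minimal sketch, with hypothetical class and field names; the point is that candidate data is immutable to the system and nothing reaches the approved list without a human action.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Candidate:
    # frozen=True enforces the rule: AI doesn't alter candidate data.
    name: str
    cv: str

@dataclass
class Shortlist:
    """AI proposes; a human must approve before anything is actioned."""
    suggested: list
    approved: list = field(default_factory=list)

    def approve(self, candidate):
        # A human approves from the AI's suggestions; the AI alone
        # can never move anyone into the approved list.
        if candidate not in self.suggested:
            raise ValueError("can only approve a suggested candidate")
        self.approved.append(candidate)

alice = Candidate("Alice", "10 years of Go")
sl = Shortlist(suggested=[alice])     # AI output: a suggestion only
sl.approve(alice)                     # human judgment closes the loop
print([c.name for c in sl.approved])  # ['Alice']
```

Making the approval step a required method call, rather than a convention, is what turns "humans approve decisions" from a policy into a property of the system.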

It makes the system both faster and safer. And recruiters trust it more because they stay in control.


The hardest challenge has been cultural, not technical

Selling or supporting deterministic software is fundamentally different from doing the same for AI systems.

Our sales and customer success teams were used to certainty.

AI… doesn’t do certainty.

So we invest heavily in:

  • enablement
  • hands-on demos
  • "give it a try" sessions
  • shadowing internal usage
  • scenarios, edge cases, and safe failure modes
  • giving stability where possible, clarity always

We’ve had to train people not just in how the tools work, but in how to think in a world where outputs vary. And the only reliable way to build comfort is through real usage, not perfect demos.

We’ve even moved away from polished demo environments. Now we use the product live when talking to clients. It’s more honest and, ironically, builds more confidence and trust.


Our MVPs look completely different now

AI tempts you to build everything at once.

We’ve had to learn restraint.

Our AI MVPs focus on:

  • proving value before building features
  • accepting that prompts will evolve
  • avoiding rigid architectures early
  • releasing earlier than feels comfortable
  • adapting as we learn from real behaviours

The paradox of AI development is:

It expands what’s possible while demanding more focus.


Matching & sourcing are entering a new era

AI changes matching in a fundamental way.

Instead of relying on rigid filters or keyword matching, we can now interpret:

  • responsibilities
  • outcomes
  • inferred skills
  • behavioural signals
  • seniority cues
  • patterns across job descriptions

It’s more contextual, faster, and often more accurate. The golden rule still applies: garbage in, garbage out. But the ceiling of what’s possible has shifted dramatically.
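One way to picture contextual matching is as a weighted blend of per-signal similarities. In practice each similarity would come from an LLM or embedding model upstream; the signal names, weights, and numbers below are illustrative assumptions, not hackajob’s scoring.

```python
def match_score(similarities, weights):
    """Blend per-signal similarities (each in [0, 1]) into one score."""
    total = sum(weights.values())
    return sum(weights[s] * similarities.get(s, 0.0) for s in weights) / total

# Hypothetical similarities for one CV against one job description.
sims = {"responsibilities": 0.9, "inferred_skills": 0.7, "seniority": 0.5}
w = {"responsibilities": 0.5, "inferred_skills": 0.3, "seniority": 0.2}
print(round(match_score(sims, w), 2))  # 0.76
```

The contrast with keyword filtering is that each input here is already an interpretation (what the person actually did, what skills that implies) rather than a literal string match, so garbage in still means garbage out, but the ceiling is far higher.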

Our long-term goal is simple:

An agentic matching system that improves continuously as the market evolves.


Why we’re sharing this openly

Most competitors are quiet about how they build AI.

We see that as a missed opportunity.

We want hackajob to be:

  • transparent
  • ethical
  • collaborative
  • a convener, not just a vendor
  • a contributor to industry learning

That’s why we’re publishing articles like this, sharing insights, and building in the open.

It’s good for trust.

Good for engineering credibility.

And good for pushing the industry forward.


Closing thoughts

AI has rewritten the rules of product development.

The teams that embrace uncertainty, experiment openly, involve humans intelligently, and build with transparency will be the ones who define what comes next.

We’re all figuring this out together.

At hackajob, we’re committed to doing that learning in public.


FAQ

How is building AI products different from traditional software development?

Traditional software relies on structured data and predictable logic. AI lets us work directly with raw, unstructured content and get early signals with far less upfront engineering. That makes it possible to validate ideas in days instead of months and build systems that simply weren’t feasible before.

Why does hackajob use an Experiment → Early Release → General Release lifecycle?

AI introduces uncertainty, and you can’t eliminate that with planning alone. Our lifecycle makes expectations explicit: experiments test feasibility, early releases gather real usage and feedback, and GA is reserved for features that meet stability and success metrics. It keeps teams aligned and prevents confusion about what’s ready for production.

Why is human-in-the-loop the default for all AI features?

Hiring decisions have real consequences, so AI acts as a support layer, not a replacement. The model can summarise, assess and suggest, but humans confirm every decision. This balance gives recruiters speed and clarity, while maintaining fairness, control and trust.

How is AI changing matching and sourcing at hackajob?

Instead of relying on keyword filters, AI can interpret responsibilities, outcomes, inferred skills and context from both CVs and job descriptions. That means more accurate matches earlier in the process, and it lays the groundwork for agentic systems that improve continuously as the market evolves.