hackajob Insider

How LexisNexis Legal & Professional builds AI that people can trust

Written by Diana Pavaloi | Mar 11, 2026 2:31:20 PM

AI has moved fast over the past few years. Faster than most organisations, teams, and processes were designed to handle. In high-stakes domains like law, that speed raises a harder question:

What does it actually take to build AI people can trust at scale?

LexisNexis® Legal & Professional has been applying AI and machine learning for decades, long before today’s wave of generative models. It provides AI-powered information, analytics, and workflows for legal, regulatory, and business professionals, helping customers increase productivity, improve decision-making, achieve better outcomes, and advance the rule of law around the world. That depth of experience shapes how the company approaches modern AI: deliberately, responsibly, and with a clear focus on real-world impact.

In the latest episode of the DevLab podcast, we spoke with Min Chen, Senior Vice President and Chief AI Officer, and Serena Wellen, Vice President of Product Management, about how LexisNexis builds and scales AI systems while prioritizing trust.

Trust is not a feature. It’s a system

Trust cannot be bolted on at the end of an AI product’s lifecycle. It has to be designed into the system from the start. Serena notes that customer data is handled with tight governance, and that user data is anonymized and sanitized to keep sessions secure and private.

As Serena explains:

"For our customers, hallucination is a potentially career-ending problem."

In legal tech, AI outputs need to be explainable, grounded, and authoritative. Large language models are powerful, but they are also probabilistic by nature. Left unchecked, they will prioritize fluent answers over correct ones.

This is why LexisNexis focuses heavily on grounding AI responses in its own authoritative content. Rather than treating hallucination as an edge case, they design systems that use approaches like citation-based verification to confirm whether a cited case is real or hallucinated. Serena adds that customers are encouraged to confirm responses against underlying sources and documents.

That changes how teams think about the problem. Accuracy isn’t taken for granted. It’s something that’s constantly checked, tested, and challenged.
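To make that idea concrete, here is a minimal sketch of what a citation-grounded check can look like, assuming a deliberately simplified citation pattern and an authoritative index that can be queried. The function names, the regular expression, and the index lookup are illustrative assumptions for this example, not a description of LexisNexis’s implementation.

import re

# Hypothetical interface to an authoritative index of cases. In a real
# system this would query a curated, citable content store.
def case_exists(citation: str, authoritative_index: set[str]) -> bool:
    """Return True if the citation appears in the authoritative index."""
    return citation in authoritative_index

# Deliberately simplified citation pattern, e.g. "Smith v. Jones, 123 F.3d 456".
CITATION_PATTERN = re.compile(r"[A-Z][\w.]* v\. [A-Z][\w.]*, \d+ [A-Za-z.0-9]+ \d+")

def verify_citations(model_answer: str, authoritative_index: set[str]) -> dict:
    """Flag citations in a model answer that cannot be grounded in known sources."""
    citations = CITATION_PATTERN.findall(model_answer)
    unverified = [c for c in citations if not case_exists(c, authoritative_index)]
    return {"citations_found": citations, "unverified": unverified, "grounded": not unverified}

# Usage: an unverified citation is routed to human review instead of being
# presented to the user as fact.
index = {"Smith v. Jones, 123 F.3d 456"}
result = verify_citations("As held in Smith v. Jones, 123 F.3d 456, ...", index)
print(result["grounded"], result["unverified"])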

AI quality lives on a spectrum

One important reframing comes from how LexisNexis thinks about quality. In many AI discussions, accuracy is treated as a binary state. Either the system works, or it doesn’t.

In practice, that framing breaks down quickly, as Min explains:

"There’s no such thing as one hundred percent accuracy or one hundred percent relevancy, especially in a complex, high‑standard domain like legal."

Instead of chasing a mythical perfect model, LexisNexis evaluates AI across multiple dimensions. Accuracy, relevance, authority, and trustworthiness all matter. Quality is something teams actively measure, review, and improve over time.

This approach also influences how teams talk about progress internally. Rather than asking whether an AI feature is ‘done’, the focus shifts to whether it is good enough for a specific use case, and how it can be improved safely.
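As a loose illustration of that mindset, a quality review can be expressed as scores on several dimensions checked against thresholds chosen for a specific use case, rather than a single pass/fail verdict. The dimensions below echo the ones mentioned above; the scores, thresholds, and helper function are placeholder assumptions, not LexisNexis’s actual evaluation criteria.

# Hypothetical multi-dimensional quality check: each dimension is scored
# (by reviewers or automated evals) and compared with thresholds set for
# a specific use case, instead of a single binary "works / doesn't work".
DIMENSIONS = ("accuracy", "relevance", "authority", "trustworthiness")

def review(scores: dict[str, float], thresholds: dict[str, float]) -> dict:
    """Report which dimensions fall below the bar for this use case."""
    gaps = {d: scores[d] for d in DIMENSIONS if scores[d] < thresholds[d]}
    return {"fit_for_use_case": not gaps, "needs_improvement": gaps}

# Example: strong relevance, but accuracy is below this use case's bar,
# so the feature is "good enough" only after that gap is closed.
print(review(
    scores={"accuracy": 0.86, "relevance": 0.95, "authority": 0.90, "trustworthiness": 0.92},
    thresholds={"accuracy": 0.90, "relevance": 0.85, "authority": 0.85, "trustworthiness": 0.90},
))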

Product, AI, and subject‑matter expertise move together

A big part of how LexisNexis builds reliable AI comes down to how teams are structured. Both Serena and Min emphasize building a common language and shared baseline across teams as a key enabler of effective collaboration.

Rather than separating product, data science, and domain expertise into silos, they treat them as a single unit.

"This unit of product manager, data scientist, and subject matter expert is a fundamental unit for advancing our experimentation and delivering the kind of innovation we do."

Serena describes the day-to-day flow: the product manager brings a customer problem to the triad, and the three review examples together.

This structure allows teams to move faster without sacrificing quality. Subject‑matter experts help define what ‘good’ looks like. Product managers ensure the solution fits real workflows. Data scientists focus on model behaviour and evaluation.

Crucially, trust becomes a shared responsibility, not something owned by a single function.

AI fluency is a cultural shift, not a tooling problem

As Chief AI Officer, Min’s role extends beyond models and platforms. A large part of her work is helping the wider organisation become more confident working with AI.

That confidence does not come from rolling out new tools alone.

"Helping the organization become more confident with AI is far more than technology. It’s a cultural shift in how we work, how we learn, and how we innovate together."

Min also notes that when product leaders understand how AI works, teams share a stronger common language and collaboration becomes smoother.

Teams need space to experiment, clear expectations around quality, and psychological safety to ask hard questions about AI outputs. Leaders play a critical role here, not by having all the answers, but by creating an environment where learning is continuous and responsible use is encouraged. 

Why people stay and grow

LexisNexis is often described as a company where people build long careers. That longevity is not accidental. Serena points to the mission as a major driver, describing LexisNexis as advancing the rule of law around the world. She adds that when people see that their work matters, it creates a powerful feedback loop.

As Serena explains, reflecting on her experience at LexisNexis:

"People come to Lexis because they’re attracted by the technology, but they stay because of the other people who work at Lexis."

Working on complex, high‑impact problems requires trust not just in systems, but in teammates. Collaboration, respect for expertise, and a shared sense of purpose all contribute to an environment where people can grow alongside the technology.

Watch the full conversation

Watch the full DevLab conversation with Min Chen and Serena Wellen to hear detailed examples of grounding, evaluation, and risk management in legal AI.

Interested in building trusted AI-powered workflow solutions used in high-stakes domains? Explore open roles at LexisNexis.