Technical Product Owner – Data & AI Products (T3)
Location: Manchester or Leeds
Department: Data & Analytics / Product Management
Reports to: Senior Product Manager, Data Products
About Cox Automotive UK
Cox Automotive is the world's largest automotive services organisation, providing digital, data, and physical solutions across the entire vehicle lifecycle.
Our mission is to transform how the automotive industry buys, sells, and manages vehicles through data, technology, and insight. The Data Products team is at the heart of this mission — building scalable data platforms, AI-powered products, and ML models that unlock value from Cox Automotive's extensive data ecosystem and drive measurable outcomes for our OEM clients.
About the Role
As Technical Product Owner – Data & AI Products, you will play a hands-on, cross-cutting role at the intersection of data engineering, data science, and product delivery. You will own the delivery backlog across our data and AI product layer — ensuring that data pipelines, ML models, and data products are built to production standard, are continuously monitored, and are reusable across our growing OEM client base.
This is not a traditional TPO role. You will need to operate with equal fluency across data infrastructure, machine learning model lifecycle, and product delivery — translating high-level product goals into technically precise stories while ensuring our AI and data products are trustworthy, observable, and scalable.
You will be the person who ensures the open questions don't stay open. You own the answers.
Key Responsibilities
Backlog Ownership & Agile Delivery
- Own and manage the delivery backlog across data engineering, data science, and data product workstreams, balancing business value, technical dependencies, and delivery risk.
- Write detailed, unambiguous user stories with clear acceptance criteria, non-functional requirements, and data quality considerations.
- Ensure stories are technically refined, estimated, and sprint-ready — working closely with data engineers, data scientists, and software engineers.
- Ensure high-quality, production-ready outputs are delivered each sprint, with particular attention to model readiness and data pipeline reliability.
ML Model Lifecycle Ownership
- Own the Model Development Lifecycle (MDLC) — defining and maintaining the process by which models are developed, evaluated, deployed, monitored, and retrained.
- Ensure every model going to production has a defined model card: what it does, what data it needs, its known limitations, and what triggers a retrain.
- Work with data scientists to define the model API contract for each product — agreeing the exact inputs, outputs, confidence scores, and explanation fields before engineering builds against it.
- Champion the rules-before-models approach where appropriate — ensuring early product iterations can ship and generate learning data while ML models mature.
- Own the sequencing decision between rules-based and ML-based recommendations, in collaboration with the Lead Data Scientist and Senior Product Manager.
Data Product Health & Observability
- Define and own the three-layer monitoring framework across all data products:
  - Business outcome metrics (recommendation acceptance rate, RPI impact, days-to-sale delta)
  - Model health metrics (prediction error, confidence calibration, feature drift, concept drift)
  - Data quality metrics (Golden Record completeness, source freshness, VIN match rates, exception rates)
- Ensure baseline measurements are captured before any product goes live — no product ships without a measurable before/after.
- Own the escalation path when model performance degrades or data quality drops below threshold.
- Drive tooling decisions for model monitoring and drift detection (e.g. MLflow, Evidently) — and ensure the team does not build monitoring infrastructure from scratch.
Shared Platform & Reusability
- Ensure data products are built as shared capabilities from day one — not rebuilt for every new OEM client.
- Own and maintain the three-layer architecture across data products:
  - Platform layer (OEM-agnostic MLOps infrastructure, Golden Record, taxonomy)
  - Model layer (universal models reusable across OEMs; OEM-specific models where required)
  - Configuration layer (OEM-specific playbooks, guardrails, and business rules)
- Ensure the recommendation API architecture supports swapping between shared and OEM-specific models transparently.
- Make the shared platform layer visible on the roadmap with its own milestones — protecting it from being deprioritised under delivery pressure.
Technical Product Ownership
- Act as the primary product counterpart to data engineers, analytics engineers, data scientists, and platform teams.
- Shape and refine requirements relating to:
  - Data pipelines, ingestion, and orchestration
  - ML model training, evaluation, and deployment pipelines
  - APIs and data services (including the Recommendation API and Marketplace integration)
  - Golden Record data model and taxonomy integration (e.g. JATO)
  - Reporting, dashboards, and analytics products
- Ensure technical design decisions align with product goals, architectural principles, and long-term scalability across a multi-OEM client base.
- Champion data quality, observability, security, and governance requirements within the backlog.
Collaboration & Alignment
- Partner with the Senior Product Manager to translate product vision, roadmaps, and OKRs into executable delivery plans.
- Collaborate closely with the Lead Data Scientist and Data Engineering Lead to ensure the model layer and data layer are always production-ready before the product layer depends on them.
- Work with the DE Product team (Andrei and Christina) to ensure the model API contract and data field set are agreed before engineering builds the UI layer.
- Align with the Vehicle & Valuations Data team (Peter McCullough) on taxonomy and market data decisions that affect model accuracy.
- Work with stakeholders across Cox Automotive brands and OEM clients to clarify requirements and manage expectations.
Stakeholder Communication
- Act as a trusted interface between technical teams and business stakeholders — able to translate model behaviour, data quality issues, and pipeline failures into language that product and commercial teams can act on.
- Clearly communicate delivery plans, technical trade-offs, model readiness, risks, and outcomes.
- Support sprint reviews and demos, ensuring outcomes are framed in terms of customer and business value.
Continuous Improvement
- Use delivery metrics, model performance data, platform health signals, and stakeholder feedback to continuously refine priorities.
- Identify opportunities to reduce technical debt, improve developer experience, and streamline data and model delivery.
- Actively contribute to agile ceremonies, retrospectives, and ways-of-working improvements — particularly as the data science and data engineering teams mature their practices.
About You
Experience
- 4–7 years' experience as a Technical Product Owner, Data Product Manager, ML Product Manager, or similar role operating across data engineering and data science teams.
- Proven experience delivering data platforms, ML-powered products, or API-driven services in an agile environment — ideally including at least one production ML deployment.
- Experience working in SaaS, marketplace, automotive, or data-heavy organisations is advantageous.
- Experience working in early-stage or scaling product teams where processes are being built from scratch is highly desirable.
Technical Capability
- Strong understanding of modern data architectures and platforms (e.g. AWS, Snowflake, Databricks, dbt).
- Practical knowledge of:
  - Data pipelines, orchestration, and ingestion
  - ML model development, evaluation, and productionisation
  - Model monitoring, drift detection, and retraining triggers
  - APIs, event-driven architectures, and webhook patterns
  - Data modelling, schemas, and analytics use cases
- Familiarity with MLOps tooling (e.g. MLflow, Evidently, SageMaker, or similar).
- Familiarity with BI and visualisation tools such as Power BI, Tableau, or Looker.
- Confident working with engineers and data scientists on technical concepts, constraints, and trade-offs (without needing to code).
Agile & Delivery Skills
- Extensive experience working within Scrum or Kanban teams in a data or ML context.
- Strong backlog management, refinement, and prioritisation skills — comfortable managing backlogs that span data engineering, data science, and product simultaneously.
- Comfortable operating in environments with multiple dependencies, incomplete information, and evolving requirements.
- Understands the difference between "move fast" for UI and "move fast" for ML — and knows when each applies.
Soft Skills & Mindset
- Excellent communicator, able to bridge detailed technical discussions (model drift, pipeline failures) and senior stakeholder conversations (RPI impact, days-to-sale improvement).
- Highly organised, outcome-focused, and comfortable owning open questions rather than escalating them.
- Curious, pragmatic, and passionate about building high-quality AI and data products that solve real problems.
- Comfortable with ambiguity — able to make a good decision with incomplete information and adjust as more becomes clear.
This role offers a rare opportunity to help build the data and AI foundation of a pan-European automotive intelligence platform — from the ground up, at scale, with a real client and a clear product vision.
hackajob is partnering with Cox Automotive to fill this position. Create a profile to be automatically considered for this role, and for others that match your experience.