Senior Data Engineer (Platform)

Sage
Newcastle, United Kingdom
Data Engineer · Platform Engineer · Cloud Engineer · DevOps Engineer
Actively hiring

hackajob is partnering with Sage to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.

 

As a Senior Data Engineer in the Platform squad, you will play a pivotal role in designing, building, and maintaining the foundational infrastructure, tooling, and standards that every data engineering team depends on. This platform underpins the Sage DataHub, which is central to Sage’s product growth ambitions and provides the trusted, high‑quality data foundation required for reliable, scalable, and responsible AI.

You will help lead and shape the AWS platform, infrastructure‑as‑code libraries, CI/CD patterns, and golden paths that enable teams to deliver at pace, while ensuring the DataHub can power agentic and AI‑driven experiences for our customers with confidence and trust.

Working as a key contributor to the Platform squad, your ability to see work through end‑to‑end — from design to production — will be critical. You will act as a trusted advisor to other squads, setting the standard for what “done well” looks like at the platform and DataHub level, and helping teams ship faster, more safely, and with greater confidence.

This is a hybrid role, requiring three days per week in our Newcastle office.

What You’ll Do:

• Architect and maintain the AWS platform — account structure, security boundaries, cost governance, and resource tagging — ensuring the environment stays secure, observable, and cost-efficient.
• Collaborate with the squad lead to evolve the IaC library: write reusable CDK and Terraform modules that other squads reach for first when building new infrastructure.
• Design and maintain golden paths for Lambda, Kafka (MSK), and Flink — opinionated, well-documented patterns that reduce cognitive load for the broader data engineering team.
• Build and maintain common CI/CD pipeline patterns and GitHub Actions infrastructure shared across all squads.
• Set and enforce repository standards: branching strategies, code review norms, and release processes.
• Help lead and develop the shared observability platform (CloudWatch and New Relic) — contribute to standards, support platform management, and help pipeline teams instrument their workloads effectively.
• Drive cost allocation and tagging governance across the AWS estate, enabling showback and chargeback by squad or product.
• Contribute to platform onboarding: help shape access provisioning, self-service runbooks, and the tooling that makes joining the platform straightforward.
• Contribute to architecture and design decisions, mentoring engineers across squads and acting as the go-to voice on platform-level concerns.
• Care deeply about the work you deliver end-to-end: from design through to living in production, with the monitoring in place to prove it.
• Leverage AI tooling — including Claude and GitHub Copilot — to accelerate development, improve code quality, and find innovative solutions to complex platform challenges.
• Design and build AI agents and tooling that enable the rapid growth and evolution of the DataHub, automating complex workflows and accelerating delivery across the team.
• Drive improvements in developer productivity — lead time, deployment frequency, and change failure rate — making measurable progress quarter on quarter.

What you’ll be working on:

We hire technically capable people, so whilst we use the technologies below we do not expect expert knowledge in all of them. You will be fully supported if you can demonstrate a passion and technical aptitude for solving complex problems:

• AWS — the core of our production infrastructure. We make heavy use of MSK, Managed Flink, Lambda, Glue, S3, Lake Formation, CloudWatch, and CDK.
• CDK (TypeScript) and Terraform for infrastructure as code and reusable module development.
• GitHub and GitHub Actions for source control, CI/CD automation, and shared pipeline patterns.
• New Relic for the shared observability and alerting platform.
• Python and TypeScript as the primary engineering languages across the team.
• Apache Iceberg and the wider AWS lakehouse ecosystem.
• AI tooling — including Claude and GitHub Copilot — for rapid prototyping, code generation, and development acceleration.
• AI agent frameworks and LLM APIs for building intelligent tooling and automation in support of DataHub growth.

You should apply if:

• You have strong hands-on experience with AWS, particularly building and maintaining infrastructure in a production data or platform context.
• You are comfortable working deeply with infrastructure as code and care about the quality and reusability of what you write.
• You have experience designing CI/CD systems and have a clear view of what good looks like.
• You think deeply about developer experience — you have made other engineers’ lives measurably better, and you have the examples to show for it.
• You can operate across abstraction levels — from a detailed PR review to a cross-squad architectural discussion.
• You are comfortable working in a fast-paced, delivery-first environment and thrive when given broad responsibility and autonomy.
• Experience with streaming infrastructure (Kafka, Flink, or similar) is a strong plus.
• You are passionate about seeing your work through from inception to living and breathing in production.



