hackajob is partnering with NTT DATA UK to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.
Position Title:
Databricks Data Engineer
Organisation / Function:
Data & AI Practice
Job Summary / Purpose
We are seeking a highly skilled Databricks Data Engineer to join our Data & AI practice. The
successful candidate will have deep expertise in building scalable data pipelines, optimising
Lakehouse architectures, and enabling advanced analytics and AI use cases on the
Databricks platform. This role is critical in building and optimising modern data
ecosystems that enable data-driven decision making, advanced analytics, and AI capabilities
for our clients.
As a trusted practitioner, you will design and implement robust ETL/ELT workflows,
integrate real-time and batch data sources, and enable secure, well-governed data products
and pipelines. You will thrive in a collaborative, client-facing environment, with a passion
for solving complex data challenges, driving innovation and ensuring the seamless delivery
of data solutions.
Primary Responsibilities
Client Engagement & Delivery
Data Pipeline Development (Batch and Streaming)
Databricks & Lakehouse Architectures
Data Modelling & Optimisation (Delta Lake, Medallion architecture)
Collaboration & Best Practices
Quality, Governance & Security
Business Relationships
Solution Architects
Data Engineers, Developers, ML Engineers, and Analysts
Client stakeholders up to Head of Data Engineering, Chief Data Architect, and Analytics
leadership
Competencies / Critical Skills
Key Competencies
Proven experience in data engineering and pipeline development on Databricks and
cloud-native platforms.
Strong consulting values with ability to collaborate effectively in client-facing
environments.
Hands-on expertise across the data lifecycle: ingestion, transformation, modelling,
governance, and consumption.
Strong problem-solving, analytical, and communication skills.
Experience leading or mentoring teams of engineers to deliver high-quality, scalable
data solutions.
Technical Expertise
Deep expertise with the Databricks platform (Spark/PySpark/Scala, Delta Lake, Unity
Catalog, MLflow).
Proficiency in ETL/ELT tools such as dbt, Matillion, Talend, or equivalent.
Strong SQL and Python (or equivalent language) skills for data manipulation and
automation.
Hands-on experience with cloud platforms (AWS, Azure, GCP).
Familiarity with Databricks Workflows and other orchestration tools.
Knowledge of data modelling methodologies (star schemas, Data Vault, Kimball, Inmon).
Familiarity with medallion architectures, data lakehouse principles, and distributed data
processing.
Experience with version control tools (GitHub, Bitbucket) and CI/CD pipelines.
Understanding of data governance, security, and compliance frameworks.
Exposure to AI/ML workloads desirable.
Experience, Qualifications, and Education
Experience: Minimum of 5–8 years in data engineering, data warehousing, or data
architecture roles, with at least 3 years working with Databricks.
Education: University degree required.
Preferred: BSc/MSc in Computer Science, Data Engineering, or a related field.
Databricks certifications (Data Engineer Professional) highly desirable.
Measures of Success
Delivery of high-performing, scalable, and secure data pipelines aligned to client
requirements.
High client satisfaction and successful adoption of Databricks-based solutions.
Demonstrated ability to innovate and improve data engineering practices.
Contribution to the growth of the practice through reusable assets, accelerators, and
technical leadership.