Mid-Senior Data Engineer

Frasers Group Tech
Shirebrook, Mansfield, UK

hackajob is partnering with Frasers Group Tech to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.

We are looking for a Mid-Senior Data Engineer to join our growing Data Engineering team and help develop, maintain, support, and integrate our expanding estate of data systems. You will be instrumental in designing, building, and maintaining robust, scalable data pipelines that power our operational subscribers and analytical platforms. You will work with a diverse range of data sources, integrating both real-time streams and micro-batches and connecting to varied endpoints to move data at speed and at scale.

The right candidate will have deep, hands-on experience across the data landscape, with a strong focus on Databricks, and will be keen to build on that knowledge, learning new technologies along the way while supporting both future and legacy technologies and processes.

You will code, test, and document new or modified data systems, creating scalable, repeatable, secure pipelines and applications for both operational data and analytics, serving consumers inside and outside the business. You will grow our capabilities, solving new data problems and challenges every day.

Key Responsibilities:

  • Design, Build, and Optimise Real-Time Data Pipelines: Develop and maintain robust, scalable stream and micro-batch data pipelines using Databricks, Spark (PySpark/SQL), and Delta Live Tables (a minimal pipeline sketch follows this list).
  • Implement Change Data Capture (CDC): Build efficient CDC mechanisms to capture and process data changes from various source systems in near real-time (sketched below).
  • Master Delta Lake: Leverage the full capabilities of Delta Lake, including ACID transactions, time travel, and schema evolution, to ensure data quality and reliability (illustrated below).
  • Champion Data Governance with Unity Catalog: Implement and manage data governance policies, data lineage, and fine-grained access control using Databricks Unity Catalog (see the grants sketch below).
  • Enable Secure Data Sharing with Delta Sharing: Design and implement secure, governed data sharing solutions that distribute data to both internal and external consumers without replicating it (a consumer-side sketch follows).
  • Integrate with Web Services and APIs: Develop and manage integrations that push operational data to key external services as well as internal APIs (see the foreachBatch sketch below).
  • Azure Data Ecosystem: Work extensively with core Azure data services, including Azure Data Lake Storage (ADLS) Gen2, Azure Functions, and Azure Event Hubs, alongside CI/CD tooling.
  • Data Modelling and Warehousing: Apply strong data modelling principles to design and implement logical and physical data models for our analytical and operational data stores.
  • Monitoring and Performance Tuning: Proactively monitor data pipeline performance, identify bottlenecks, and implement optimisations to ensure low latency and high throughput.
  • Collaboration and Mentorship: Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, and mentor junior data engineers.
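
For illustration, a minimal Delta Live Tables pipeline of the kind described above might look like the following sketch. The landing path, table names, and columns are assumptions for this example, not part of any actual codebase.

import dlt
from pyspark.sql import functions as F

# Bronze: ingest raw order events as a stream via Auto Loader.
# (The DLT runtime provides the `spark` session.)
@dlt.table(comment="Raw order events (hypothetical landing path).")
def orders_bronze():
    return (
        spark.readStream
        .format("cloudFiles")                  # Databricks Auto Loader
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders")           # assumed landing location
    )

# Silver: clean and type the stream for downstream consumers.
@dlt.table(comment="Typed orders with a processing timestamp.")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("_processed_at", F.current_timestamp())
    )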
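
Change Data Capture in this stack is commonly expressed with Delta Live Tables' APPLY CHANGES API. A hedged sketch, assuming a bronze CDC feed that carries an op flag and a change timestamp:

import dlt
from pyspark.sql import functions as F

dlt.create_streaming_table("customers")        # target kept in sync with the source system

dlt.apply_changes(
    target="customers",
    source="customers_cdc_feed",               # hypothetical bronze CDC stream
    keys=["customer_id"],                      # source primary key (assumed)
    sequence_by=F.col("change_ts"),            # resolves out-of-order events
    apply_as_deletes=F.expr("op = 'DELETE'"),  # rows flagged as deletes in the feed
    stored_as_scd_type=1,                      # overwrite in place (SCD Type 1)
)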
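
Delta Lake's time travel and schema evolution, mentioned above, look roughly like this; the table and the new column are illustrative only:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Time travel: query the table as it looked at an earlier version
# (TIMESTAMP AS OF works the same way with a timestamp).
snapshot = spark.sql("SELECT * FROM sales.orders VERSION AS OF 10")

# Schema evolution: append rows carrying a new column; mergeSchema
# evolves the table schema rather than failing the write.
new_rows = spark.createDataFrame(
    [("o-1001", 42.50, "GBP")],
    "order_id string, amount double, currency string",  # currency is the new column
)
new_rows.write.format("delta").mode("append") \
    .option("mergeSchema", "true").saveAsTable("sales.orders")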
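
Unity Catalog access control is plain SQL and can be issued from a notebook; the catalog, schema, and group names here are made up:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Grant an analyst group read access down the catalog -> schema -> table chain.
spark.sql("GRANT USE CATALOG ON CATALOG retail TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA retail.sales TO `analysts`")
spark.sql("GRANT SELECT ON TABLE retail.sales.orders TO `analysts`")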
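
On the consuming side, Delta Sharing lets a recipient read shared tables without data replication, using the open-source delta-sharing client (pip install delta-sharing); the profile file and share path are placeholders:

import delta_sharing

profile = "/path/to/config.share"                    # credential file issued by the provider
table_url = profile + "#retail_share.sales.orders"   # share#schema.table addressing

df = delta_sharing.load_as_pandas(table_url)         # reads directly from the share
print(df.head())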
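
Pushing operational data to an external web service from a stream is often done with foreachBatch; the endpoint, payload shape, and source table below are hypothetical:

import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def push_batch(batch_df, batch_id):
    # POST each micro-batch as JSON; production code would add retries,
    # batch-size limits, and secret-managed authentication.
    rows = [row.asDict() for row in batch_df.collect()]
    resp = requests.post("https://api.example.com/orders", json=rows, timeout=30)
    resp.raise_for_status()

(
    spark.readStream.table("sales.orders_silver")    # assumed streaming source table
    .writeStream
    .foreachBatch(push_batch)
    .option("checkpointLocation", "/mnt/checkpoints/orders_push")
    .start()
)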

Qualifications

What We're Looking For:

  • Proven Databricks Expertise: Extensive hands-on experience with the Databricks Lakehouse Platform is essential.
  • Strong Spark and Python/SQL Skills: Proficiency in Spark programming (PySpark and/or Scala) and expert-level SQL skills.
  • Real-Time Data Processing: Demonstrable experience building and managing stream and micro-batch processing pipelines using technologies like Spark Structured Streaming or Delta Live Tables (an Event Hubs consumer sketch follows this list).
  • Deep Understanding of Delta Lake Concepts: Thorough knowledge of Delta Lake architecture and features (ACID transactions, time travel, optimization techniques).
  • Experience with Databricks Advanced Features: Practical experience with Change Data Capture (CDC), Unity Catalog for data governance, and Delta Sharing for secure data collaboration.
  • Web Service and API Integration: A proven track record of integrating data pipelines with external web services and REST APIs.
  • Solid Azure Experience: Strong experience with core Azure data services (ADLS Gen2, Event Hubs, Azure Functions).
  • Data Modelling and Warehousing Fundamentals: A strong understanding of data modelling concepts (e.g., Kimball, Inmon) and experience with data warehousing principles (a star-schema sketch follows this list).
  • CI/CD and DevOps Mindset: Experience building CI/CD pipelines for data engineering workloads, backed by Git-based version control.
  • Excellent Problem-Solving and Communication Skills: The ability to troubleshoot complex data issues and communicate technical concepts effectively to both technical and non-technical audiences.
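
As a concrete instance of the streaming and Azure experience above, Spark Structured Streaming can consume Azure Event Hubs through its Kafka-compatible endpoint; the namespace, hub name, and connection string are placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

conn = "Endpoint=sb://NAMESPACE.servicebus.windows.net/;..."  # keep in a secret scope in practice

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "NAMESPACE.servicebus.windows.net:9093")
    .option("subscribe", "orders")             # the Event Hub name acts as the topic
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option(
        "kafka.sasl.jaas.config",
        # `kafkashaded.` prefix applies on Databricks clusters; drop it on vanilla Spark.
        'kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule '
        f'required username="$ConnectionString" password="{conn}";',
    )
    .load()
)

query = (
    events.selectExpr("CAST(value AS STRING) AS body")
    .writeStream.format("console").start()
)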
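
And a minimal Kimball-style star schema, for illustration only; table and column names are invented:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Dimension: one row per product, keyed by a surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.dim_product (
        product_key BIGINT,      -- surrogate key
        product_id  STRING,      -- natural/business key
        name        STRING,
        category    STRING
    ) USING DELTA
""")

# Fact: one row per order line, joining to dimensions by surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.fct_order_line (
        order_id    STRING,
        product_key BIGINT,      -- FK to dim_product
        date_key    INT,         -- FK to a date dimension
        quantity    INT,
        net_amount  DECIMAL(12,2)
    ) USING DELTA
""")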

Desirable

  • GCP and BigQuery Knowledge: Experience with GCP and BigQuery for analytical data workloads, supporting seamless interoperability within our multi-cloud strategy (a connector sketch follows).
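
For the desirable multi-cloud piece, reading a BigQuery table from Spark typically goes through the spark-bigquery connector; the project, dataset, and table names here are invented:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (
    spark.read.format("bigquery")    # requires the spark-bigquery connector on the cluster
    .option("table", "my-project.analytics.daily_sales")
    .load()
)
df.show(5)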
