
Lead Software Engineer, Spark, PySpark, AWS

JPMorganChase
Hyderabad, Telangana, IND
Data Engineer · Platform Engineer · Cloud Architect · Data Architect · Staff Engineer
Actively hiring

hackajob is partnering with JPMorganChase to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.

JOB DESCRIPTION

We have an opportunity to impact your career and take you on an adventure where you can push the limits of what's possible.

As a Vice President, Senior Data Engineer at JPMorganChase within Consumer and Community Banking Technology, you are an integral part of an agile team that designs, builds, and delivers secure, stable, and scalable enterprise data platforms. In this role, you will lead the architecture and implementation of end-to-end data solutions that enable enterprise analytics, real-time processing, and AI-driven intelligent operations. As a core technical contributor and leader, you will partner closely with engineers, data scientists, analysts, and business stakeholders to deliver high-quality data products that drive measurable business outcomes.

Job responsibilities

  • Designs and architects end-to-end data solutions (from ingestion through transformation, storage, and analytics) that are scalable, reliable, and high-performing at enterprise scale
  • Defines and enforces data architecture standards and best practices for both batch and real-time processing pipelines
  • Builds and optimizes scalable data ingestion, transformation, and analytics solutions using Apache Spark, AWS Glue, and Apache Iceberg
  • Develops and manages batch and streaming pipelines leveraging Apache Kafka and Apache Flink for real-time data processing
  • Implements Data Lake architectures, including data organization, partitioning strategies, governance, and efficient access patterns
  • Designs and deploys cloud-native data solutions using AWS services such as S3, Lambda, EKS (Elastic Kubernetes Service), and Step Functions
  • Implements Infrastructure as Code (IaC) practices for reproducible, secure, and scalable deployments
  • Identifies opportunities to eliminate or automate remediation of recurring issues and improves operational stability through DataOps, observability, and monitoring frameworks
  • Builds automated, self-healing data operations using agentic frameworks and AI-driven workflows for intelligent monitoring, optimization, and remediation
  • Provides technical leadership and mentorship to data engineering team members, fostering strong engineering practices and operational excellence
  • Collaborates with cross-functional partners to translate requirements into robust technical solutions and establishes strong data quality standards and controls

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 5+ years applied experience
  • Demonstrated ability to architect and deliver large-scale enterprise data platforms from ingestion to analytics
  • Deep expertise in Apache Spark, AWS Glue, Apache Iceberg, and Data Lake architectures
  • Proven experience designing and operating batch and real-time pipelines using Apache Kafka and Apache Flink
  • Strong hands-on experience with AWS services including S3, Lambda, EKS, and Step Functions
  • Experience building automated solutions using agentic frameworks and AI-driven workflows for intelligent operations
  • Advanced proficiency in Python, Scala, or Java, with strong SQL skills
  • Practical experience with modern data engineering practices including DataOps, data observability, and data governance
  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related technical field

Preferred qualifications, capabilities, and skills

  • Master’s degree in a related technical field
  • Experience with additional streaming technologies and event-driven architectures
  • Experience with containerization and microservices (e.g., Docker, Kubernetes)
  • AWS certifications (Solutions Architect, Data Analytics, Machine Learning)
  • Experience in financial services or other highly regulated industries

Upskill

Level up the hackajob way. Verify your skills, learn brand new ones and test your ability with Pathways, our learning and development platform.

Ready to reach your potential?