Software Engineer III - Python, PySpark, Databricks, AWS

JPMorganChase
Bengaluru, Karnataka, IND
Actively hiring
hackajob is partnering with JPMorganChase to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.

 
JOB DESCRIPTION

We have an exciting opportunity for you to advance your data engineering career and make a meaningful impact at JPMorganChase.

As a Software Engineer III at JPMorganChase within Corporate Technology, you will design and deliver high-performance data solutions that power the firm’s technology products.

Job responsibilities

  • Architect, develop, and maintain high-performance ETL pipelines and data workflows using Python, PySpark, and Databricks (an illustrative sketch follows this list)
  • Design and implement scalable, fault-tolerant data solutions on AWS, leveraging services such as S3 and Lambda
  • Write secure, optimized code in Python and PySpark with a focus on performance and reliability
  • Develop and optimize SQL-based data models, queries, and transformations to support analytical and operational needs
  • Own and operate production data pipelines end-to-end, including monitoring, alerting, and performance optimization
  • Apply knowledge of the Software Development Life Cycle toolchain, including Git and CI/CD, to maximize automation and delivery velocity
  • Gather, analyze, and synthesize large, diverse data sets to drive data-driven decision-making
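
To make the stack concrete, the following is a minimal sketch of the kind of pipeline the responsibilities above describe. It is illustrative only, not JPMorganChase code: the S3 path, column names, and target table are hypothetical, and it assumes a Databricks runtime where Delta Lake is available.

    # Illustrative ETL sketch: read raw order events from S3, clean and
    # aggregate them, and write a Delta table. All names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

    # Extract: raw JSON events landed in S3 (hypothetical bucket/prefix).
    raw = spark.read.json("s3://example-bucket/raw/orders/")

    # Transform: drop malformed rows and roll up to one row per day.
    daily = (
        raw.filter(F.col("order_id").isNotNull())
           .withColumn("order_date", F.to_date("order_ts"))
           .groupBy("order_date")
           .agg(
               F.count("order_id").alias("order_count"),
               F.sum("amount").alias("revenue"),
           )
    )

    # Load: overwrite the Delta table consumed by downstream analytics.
    daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_orders")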

Required qualifications, capabilities, and skills

  • Formal training or certification in software engineering, data engineering, or a related technical discipline, plus 3+ years of applied experience
  • Seven years of hands-on experience developing production-grade applications and data solutions in Python
  • Three years of experience building and optimizing large-scale data pipelines using PySpark
  • Proven experience designing, deploying, and managing data engineering workflows on Databricks, including Delta Lake and Unity Catalog
  • Strong hands-on experience with AWS cloud services, including S3 and Lambda
  • Proficiency in SQL for complex data querying, transformation, and performance tuning (see the example after this list)
  • Experience across the Software Development Life Cycle, with exposure to agile methodologies and to practices such as CI/CD, application resiliency, and security
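
For the SQL qualification above, a similarly hedged example: the same kind of transformation work expressed through Spark SQL, run against the hypothetical table from the previous sketch.

    # Illustrative only: a Spark SQL query over the hypothetical Delta table
    # from the pipeline sketch above.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("orders-sql-example").getOrCreate()

    top_days = spark.sql("""
        SELECT order_date,
               order_count,
               revenue,
               revenue / order_count AS avg_order_value
        FROM analytics.daily_orders
        ORDER BY revenue DESC
        LIMIT 10
    """)
    top_days.show()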

Preferred qualifications, capabilities, and skills

  • Experience with infrastructure-as-code tools such as Terraform
  • Familiarity with data governance, data quality frameworks, and data cataloging practices
  • Exposure to real-time streaming technologies such as Kafka or Kinesis (a brief sketch follows this list)
  • Experience mentoring junior engineers and contributing to engineering best practices
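
For the streaming exposure mentioned above, one more hedged sketch: a minimal Structured Streaming read from Kafka into a Delta table. The broker address, topic, and checkpoint path are hypothetical, and the Spark-Kafka connector is assumed to be on the classpath (it ships with Databricks runtimes).

    # Illustrative only: consume a Kafka topic with Structured Streaming and
    # land the raw events in a Delta table for downstream processing.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-stream-example").getOrCreate()

    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker.example.com:9092")
             .option("subscribe", "orders")
             .load()
    )

    # Kafka delivers bytes; cast the value column to a string before parsing.
    decoded = events.select(F.col("value").cast("string").alias("raw_event"))

    query = (
        decoded.writeStream.format("delta")
               .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
               .toTable("analytics.raw_order_events")
    )
    query.awaitTermination()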