
Software Engineer III: Python, Spark, Data pipelines

JPMorganChase
Mumbai, Maharashtra, IND
Data Engineer
Actively hiring

hackajob is partnering with JPMorganChase to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.

 
JOB DESCRIPTION

Be part of a dynamic team where your distinctive skills will contribute to a winning culture.


As a Data Engineer III at JPMorgan Chase within Corporate Technology, you serve as a seasoned member of an agile team, designing and delivering trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities

  • Supports review of controls to ensure sufficient protection of enterprise data
  • Advises on and makes custom configuration changes in one to two tools to generate a product at the business's or customer's request
  • Updates logical or physical data models based on new use cases
  • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace

Required qualifications, capabilities, and skills

  • Formal training or certification on data engineering concepts and 3+ years applied experience
  • 5 years of experience with advanced proficiency in programming languages including Python and Spark
  • Advanced at SQL (e.g., joins and aggregations)
  • Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; at least one relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB, or similar
  • Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions or similar
  • Proficiency in Unix scripting; data structures; data serialization formats such as JSON, Avro, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; and data processing methodologies such as batch, micro-batching, or streaming
  • Significant experience with statistical data analysis and the ability to determine the appropriate tools and data patterns for analysis
  • Experience making custom changes in a tool to generate a product

Preferred qualifications, capabilities, and skills
  • Proficiency in IaC (preferably Terraform, alternatively AWS CloudFormation)
  • Strong Python and Spark skills
