
Software Engineer III - AWS Data Engineer

JPMorganChase
Hyderabad, Telangana, IND
Data Engineer, Platform Engineer
Actively hiring

hackajob is partnering with JPMorganChase to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.

JOB DESCRIPTION

We have an exciting and rewarding opportunity for you to advance your software engineering career. Join us to build innovative data solutions and accelerate engineering productivity.

As a Senior Data Engineer at JPMorgan Chase within the Corporate Technology team, you design and deliver robust, scalable data products using Databricks/Spark on AWS. You help modernize data processing platforms and pioneer LLM-assisted development, contributing to secure and reliable solutions that drive business impact.

Job responsibilities

  • Design and deliver Databricks/Spark pipelines on AWS using Delta Lake/Lakehouse patterns
  • Build and maintain secure, high-quality production code in Python/PySpark aligned to enterprise security best practices
  • Implement CI/CD-first delivery using infrastructure-as-code (Terraform/CloudFormation) and automated testing for repeatable deployments 
  • Produce architecture and design artifacts for complex applications, ensuring performance, resiliency, and security constraints are met 
  • Gather, analyze, and synthesize data to drive continuous improvement of software applications and systems 
  • Proactively identify hidden problems and patterns in data to improve coding hygiene and system architecture

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and eight years of applied experience
  • Hands-on practical experience in system design, application development, testing, and operational stability 
  • Proficient in coding in Python and PySpark 
  • Experience in developing, debugging, and maintaining code in a large corporate environment with Databricks/Spark and database querying languages 
  • Overall knowledge of the Software Development Life Cycle
  • Solid understanding of agile methodologies, CI/CD, application resiliency, and security 
  • Demonstrated ability to use Copilot/LLM-assisted workflows effectively while maintaining quality and security through review and automation 

Preferred qualifications, capabilities, and skills

  • Experience with agentic automation or LLM tooling patterns (e.g., task-oriented agents for incident triage, deployment validation, data quality checks, or developer enablement)
  • Familiarity with Data Mesh, Airflow, and/or ThoughtSpot
  • AWS and/or Databricks certifications (e.g., AWS SAA / Developer Associate / Data Analytics Specialty, Databricks certification)
  • Experience designing, building, and operating scalable Databricks/Spark data products on AWS, with a strong emphasis on Agentic AI patterns, LLM-enabled developer workflows, and Copilot-assisted delivery
