You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.
As a Software Engineer II in the Corporate and Investment Bank at JPMorgan Chase, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics.
- Use SQL frequently and understand NoSQL databases and their niche in the marketplace
- Collaborate closely with cross-functional teams to develop efficient data pipelines to support various data-driven initiatives
- Implement best practices for data engineering, ensuring data quality, reliability, and performance
- Contribute to data modernization efforts by leveraging cloud solutions and optimizing data processing workflows
- Perform data extraction and implement complex data transformation logic to meet business requirements
- Leverage advanced analytical skills to improve data pipelines and ensure data delivery is consistent across projects
- Monitor and execute data quality checks to proactively identify and address anomalies, ensuring data availability and accuracy for analytical purposes
- Identify opportunities for process automation within data engineering workflows
- Communicate technical concepts to both technical and non-technical stakeholders
- Deploy and manage containerized applications using Kubernetes (Amazon EKS) and Amazon ECS
- Implement data orchestration and workflow automation using AWS Step Functions and Amazon EventBridge
- Use Terraform for infrastructure provisioning and management, ensuring a robust and scalable data infrastructure
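For candidates new to the pipeline work described above, a minimal sketch of an extract-transform-load flow with an inline data quality check may help set expectations. This is an illustration only, not part of the role or the firm's codebase; the column names (`account`, `amount`) and the aggregation-as-load step are hypothetical stand-ins.

```python
import csv
import io

def extract(csv_text):
    """Extract: parse raw CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cast amounts to float, dropping rows that fail a basic quality check."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # quality check: skip malformed records
        clean.append({"account": row["account"], "amount": amount})
    return clean

def load(rows):
    """Load: aggregate amounts per account (a stand-in for writing to a real sink)."""
    totals = {}
    for row in rows:
        totals[row["account"]] = totals.get(row["account"], 0.0) + row["amount"]
    return totals

# One malformed row ("oops") is filtered out by the quality check.
raw = "account,amount\nA,10.5\nB,oops\nA,4.5\nB,2.0\n"
result = load(transform(extract(raw)))  # {"A": 15.0, "B": 2.0}
```

In production such steps would typically run as orchestrated tasks (Step Functions, Glue, or Airflow) rather than a single script, but the extract/transform/load shape is the same.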
Required qualifications, capabilities, and skills:
- Formal training or certification on data engineering concepts and 3+ years of applied experience, including experience across the data lifecycle
- Advanced at SQL (e.g., joins and aggregations)
- Advanced knowledge of relational databases such as Amazon Aurora
- Experience building microservice-based components using ECS or EKS
- Working understanding of NoSQL databases
- 4+ years of data engineering experience building and optimizing data pipelines, architectures, and data sets (e.g., AWS Glue or Databricks ETL)
- Proficiency in object-oriented and functional scripting languages (e.g., Python)
- Experience developing ETL processes and workflows for streaming data from heterogeneous data sources
- Willingness and ability to learn and pick up new skillsets
- Experience working with modern data lakes (e.g., Databricks)
- Experience building pipelines on AWS using Terraform and CI/CD pipelines
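The "advanced SQL" bar above centers on joins and aggregations. As a quick self-check (illustrative only; the `accounts`/`trades` schema is invented for this example), the following uses Python's built-in sqlite3 module to join two tables and aggregate per group:

```python
import sqlite3

# In-memory database with two hypothetical tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, desk TEXT);
    CREATE TABLE trades (account_id INTEGER, notional REAL);
    INSERT INTO accounts VALUES (1, 'rates'), (2, 'fx');
    INSERT INTO trades VALUES (1, 100.0), (1, 250.0), (2, 75.0);
""")

# Join trades to their accounts, then aggregate notional per desk.
rows = conn.execute("""
    SELECT a.desk, SUM(t.notional) AS total_notional
    FROM trades t
    JOIN accounts a ON a.id = t.account_id
    GROUP BY a.desk
    ORDER BY a.desk
""").fetchall()  # [('fx', 75.0), ('rates', 350.0)]
```

Being able to reason about the join key, the grouping column, and what happens to unmatched rows (inner vs. outer join) is the kind of fluency the requirement implies.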
Preferred qualifications, capabilities, and skills:
- Experience with data pipeline and workflow management tools (Airflow, etc.)
- Strong analytical and problem-solving skills, with attention to detail.
- Ability to work independently and collaboratively in a team environment.
- Good communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- A proactive approach to learning and adapting to new technologies and methodologies.
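At their core, workflow managers like the Airflow mentioned above execute a DAG of tasks in dependency order. A toy sketch of that idea (not Airflow itself; task names are hypothetical) using only Python's standard library:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies,
# mirroring how an orchestrator orders a DAG of tasks.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

executed = []
for task in TopologicalSorter(dag).static_order():
    executed.append(task)  # a real orchestrator would run the task here
```

Real tools add scheduling, retries, backfills, and monitoring on top of this ordering, but the dependency-graph model is the common foundation.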
hackajob is partnering with JPMorganChase to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.