hackajob is partnering with JPMorganChase to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.
DESCRIPTION:
Duties: Review, understand, code, optimize, and automate existing one-off data transformation pipelines into discrete, scalable tasks. Plan, design, and implement data transformation pipelines and monitor operations of the data platform in a production environment. Collaborate with internal clients and service delivery engineers to identify data needs and intended workflows, and troubleshoot to find workable solutions. Gather, analyze, and document detailed technical requirements to design and implement solutions, and disseminate information to guide other engineers. Contribute code to the underlying infrastructure, software development kits, and platforms being built to support bespoke data transformation pipelines and enable predictive models to be produced and run at scale. Identify engineering opportunities to optimize operational effort and running costs of the data platform. Mentor junior engineering staff and provide guidance on day-to-day code development work.
QUALIFICATIONS:
Minimum education and experience required: Bachelor's degree in Computer Science, Information Technology, Software Engineering, Mathematics, or related field of study plus 5 years of experience in the job offered or as Software Engineer, Data Engineer/Developer, or related occupation.
Skills Required: This position requires 5 years of experience with the following: Designing and implementing scalable ETL pipelines to process structured and semi-structured data.
This position requires 3 years of experience with the following: Processing data across distributed environments using Apache Spark on Big Data ecosystems such as Cloudera or Hortonworks; Building distributed data processing workflows using Scala, Python, and Java on Spark; Supporting real-time and batch data ingestion, data cleansing and transformation, and feature extraction on Spark; Managing large-scale data lake tables in Parquet and Avro formats; Implementing low-latency, scalable data operations and supporting real-time lookups, updates, and analytics using Apache HBase and Apache Cassandra.
This position requires 2 years of experience with the following: Implementing ACID-compliant data operations and enabling schema evolution using Delta table structures; Implementing partitioning within Hadoop-based architectures; Configuring and maintaining Grafana dashboards integrated with Prometheus, Elasticsearch, or CloudWatch to monitor pipeline performance, API services, and system health in real time; Documenting data workflows, Spring Boot API specifications, CI/CD processes, Grafana configurations, and cloud architecture using Confluence.
This position requires 1 year of experience with the following: Creating and deploying RESTful APIs using Spring Boot in Docker containers to deliver processed data access and operational insights; Managing source code to maintain structured development workflows, version control, and team collaboration using Git with GitHub and Bitbucket; Building, deploying, and managing scalable data engineering pipelines and analytics infrastructure using Azure Data Factory, Databricks, or AWS tools such as EC2, S3, EMR, Lambda, Glue, IAM, or CloudWatch.
Job Location: 8181 Communications Pkwy, Plano, TX 75024.
ABOUT US
We offer a competitive total rewards package including base salary determined based on the role, experience, skill set and location. Those in eligible roles may receive commission-based pay and/or discretionary incentive compensation, paid in the form of cash and/or forfeitable equity, awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
JPMorgan Chase & Co. is an Equal Opportunity Employer, including Disability/Veterans