Senior Data Engineer at Cargill

Archer
Bengaluru, IN
Actively hiring

Python Developer, Platform Engineer, Data Engineer, Java Developer, Full Stack Java Developer, Full Stack Python Developer

Job Purpose and Impact

We are seeking a highly experienced Senior Engineer with strong expertise across API engineering, platform development, and data engineering. The ideal candidate will design, build, and optimize scalable services, data pipelines, and digital platforms that power enterprise applications. This role requires deep technical proficiency, strong architectural thinking, and the ability to collaborate with cross-functional teams to deliver high-quality, reliable, and secure solutions.

Key Accountabilities

Core Responsibilities

API & Platform Engineering

  • Design and develop high-performance, secure, and scalable APIs using Java, Spring, and Hibernate (a sketch follows this list).
  • Build and maintain microservices-based architectures ensuring robustness, modularity, and efficiency.
  • Engineer and support digital platform components, infrastructure, and foundational services.
  • Implement and optimize CI/CD pipelines, automated deployments, and cloud-native release processes.
  • Integrate with API gateways and manage the API lifecycle, including versioning, logging, and monitoring.
  • Troubleshoot production issues, ensure service reliability, and provide ongoing technical support.
  • Write unit, integration, and performance tests; participate in peer code reviews to ensure code quality.
  • Maintain comprehensive technical documentation, architectural diagrams, and configuration details.
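
To make the first item above concrete, here is a minimal sketch of a versioned Spring REST endpoint backed by a Hibernate-mapped entity, assuming Spring Boot 3 with Spring Data JPA. The Product domain and all other names are hypothetical illustrations, not details from this posting.

    // Sketch only: a versioned REST endpoint over a Hibernate-mapped entity.
    // Assumes Spring Boot 3 (jakarta.persistence) and an auto-configured DataSource.
    import java.util.List;

    import jakarta.persistence.Entity;
    import jakarta.persistence.GeneratedValue;
    import jakarta.persistence.Id;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @Entity
    class Product {                          // hypothetical domain entity
        @Id @GeneratedValue Long id;
        String name;
    }

    interface ProductRepository extends JpaRepository<Product, Long> {}

    @RestController
    @RequestMapping("/api/v1/products")      // version made explicit in the path
    class ProductController {
        private final ProductRepository repo;

        ProductController(ProductRepository repo) {   // constructor injection keeps it testable
            this.repo = repo;
        }

        @GetMapping
        List<Product> list() {
            return repo.findAll();
        }

        @GetMapping("/{id}")
        ResponseEntity<Product> get(@PathVariable long id) {
            return repo.findById(id)
                       .map(ResponseEntity::ok)                     // 200 with the entity
                       .orElse(ResponseEntity.notFound().build());  // 404 when absent
        }
    }

    @SpringBootApplication
    public class ApiSketch {
        public static void main(String[] args) {
            SpringApplication.run(ApiSketch.class, args);
        }
    }

Constructor injection and the explicit /api/v1 path prefix are small choices that keep versioning and unit testing straightforward.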

Data Engineering

  • Design, automate, and optimize scalable data pipelines for batch and real-time ingestion, transformation, and aggregation (see the sketch after this list).
  • Develop and maintain ETL/ELT workflows using AWS Glue, Python, SQL, and other cloud-native tools.
  • Support migration and integration across multiple data platforms like Hadoop, Snowflake, AWS, and Oracle.
  • Implement data modeling, data warehousing, and performance optimization strategies.
  • Monitor, troubleshoot, and resolve issues across data workflows, ensuring reliability and data integrity.
  • Contribute to engineering best practices, code reviews, and CI/CD enhancements for data processes.
  • Partner with analysts, data scientists, and business stakeholders to understand requirements and deliver scalable solutions.
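
As a rough illustration of the batch half of this work: the posting names AWS Glue, Python, and SQL, and since Glue jobs run on Spark, the sketch below uses Spark's Java API to show the shape of a small ingest-transform-aggregate step. The S3 paths and column names are placeholders, not details from the posting.

    // Sketch only: batch ingest -> transform -> aggregate -> write, via Spark's Java API.
    // Paths and column names are placeholders.
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.count;
    import static org.apache.spark.sql.functions.lit;
    import static org.apache.spark.sql.functions.sum;
    import static org.apache.spark.sql.functions.to_date;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class DailySalesRollup {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("daily-sales-rollup")
                    .getOrCreate();

            // Ingest: raw CSV events landed by an upstream process (placeholder path)
            Dataset<Row> raw = spark.read()
                    .option("header", "true")
                    .csv("s3://example-bucket/raw/sales/");

            // Transform + aggregate: one row per region per day
            Dataset<Row> daily = raw
                    .withColumn("amount", col("amount").cast("double"))
                    .groupBy(col("region"), to_date(col("event_ts")).alias("day"))
                    .agg(sum("amount").alias("total_amount"),
                         count(lit(1)).alias("order_count"));

            // Write: day-partitioned Parquet for downstream consumers (Snowflake, BI, ...)
            daily.write()
                    .mode("overwrite")
                    .partitionBy("day")
                    .parquet("s3://example-bucket/curated/daily_sales/");

            spark.stop();
        }
    }

Partitioning the output by day is a common choice that keeps downstream incremental loads cheap.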

Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or a related technical field, with a minimum of 9 years of work experience
  • Strong expertise in Java, Spring Framework, and Hibernate
  • Hands-on experience with Python and scalable data processing
  • Proficiency with AWS services, including API Gateway, Lambda, EC2, S3, and IAM
  • Experience with CI/CD tooling (GitLab, Jenkins, CodePipeline, or similar)
  • Experience with cloud-native logging and monitoring tools (e.g., Datadog)
  • Strong SQL and advanced data transformation skills (see the sketch after this section)
  • Experience with Snowflake, the Hadoop ecosystem, and AWS Glue
  • Strong understanding of data modeling, warehousing, and performance tuning
  • Familiarity with Oracle and BI tools such as Tableau

Architecture & Integration

  • Microservices architecture and API lifecycle management
  • Real-time and batch data pipeline integration
  • Strong understanding of distributed systems and scalable design
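
As an example of what the "advanced data transformation" line typically means in practice, here is a small sketch: a window-function query that keeps only the latest row per customer, run over plain JDBC against Snowflake. The account URL, environment variable names, and table and column names are placeholders, and the Snowflake JDBC driver is assumed to be on the classpath.

    // Sketch only: window-function dedup (latest row per key) over JDBC.
    // URL, credentials, and table/column names are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;

    public class LatestRowPerCustomer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Read credentials from the environment; never hard-code them
            props.put("user", System.getenv().getOrDefault("SNOWFLAKE_USER", "<user>"));
            props.put("password", System.getenv().getOrDefault("SNOWFLAKE_PASSWORD", "<password>"));

            // ROW_NUMBER() ranks rows per customer, newest first; rn = 1 keeps the latest
            String sql =
                "SELECT customer_id, status, updated_at " +
                "FROM (SELECT c.*, ROW_NUMBER() OVER (" +
                "          PARTITION BY customer_id ORDER BY updated_at DESC) AS rn " +
                "      FROM customers c) t " +
                "WHERE t.rn = 1";

            try (Connection conn = DriverManager.getConnection(
                     "jdbc:snowflake://<account>.snowflakecomputing.com/", props);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%s %s %s%n",
                        rs.getString("customer_id"),
                        rs.getString("status"),
                        rs.getString("updated_at"));
                }
            }
        }
    }

Snowflake also supports a QUALIFY clause, which expresses the same filter without the subquery.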

hackajob is partnering with Archer to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.
