- Distributed Systems & Streaming Data Engineering. You have experience designing and maintaining large-scale distributed systems and streaming data pipelines (ideally with Kafka, Redpanda, or similar) to process unstructured data in near real time.
- Multi-Language Programming Proficiency. You’re comfortable both debugging and extending legacy C/C++ codebases and developing clean new services and tools in Python, C#, Java, Go, TypeScript, etc.
- Cloud Architecture & DevOps Enablement. You’ve built and migrated applications to cloud platforms such as AWS (GCP a plus), containerized legacy systems, implemented high-availability architectures with load balancing and failover, and worked across relational and modern NoSQL/vector databases.
- Applied AI/ML & NLP Integration. You have hands-on experience integrating NLP features into production systems, using libraries such as spaCy or custom ML classifiers to deliver capabilities like entity extraction, classification, and sentiment analysis, and setting up feedback loops to improve models over time.
- End-to-End Technical Ownership & Agile Delivery. You thrive on taking end-to-end ownership of projects, from requirements gathering through architecture, coding, testing, and deployment, using agile practices and CI/CD pipelines to break complex goals into actionable steps.
- Communication & Technical Leadership. You excel at explaining technical concepts to non-technical stakeholders across regions, mentoring junior engineers, guiding support teams on best practices, and producing clear technical documentation so knowledge is shared effectively.
- Strategic Problem Solving in “Brownfield” Contexts. You know how to modernize mission-critical legacy systems while safeguarding service stability, balancing innovation (such as cloud-native rebuilds or GenAI integration) with the realities of established production environments.
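To make the streaming-plus-NLP requirement concrete, here is a minimal sketch of the kind of enrichment stage this role involves. It uses a stdlib regex extractor as a stand-in for a spaCy pipeline or trained classifier, and a plain iterable as a stand-in for a Kafka/Redpanda consumer; the function names (`extract_entities`, `process_stream`) are illustrative, not part of any existing system.

```python
import re
from typing import Iterable, Iterator

# Stand-in extractor: a production version would call a spaCy pipeline
# (e.g. nlp(text).ents) or an ML classifier instead of a regex.
TICKER_RE = re.compile(r"\b[A-Z]{2,5}\b")

def extract_entities(text: str) -> list:
    """Return candidate entity strings found in raw text."""
    return TICKER_RE.findall(text)

def process_stream(messages: Iterable) -> Iterator:
    """Consume a stream of raw documents and emit enriched records.

    In a real deployment `messages` would be a Kafka/Redpanda consumer
    iterator; accepting any iterable keeps the enrichment logic
    independently testable.
    """
    for text in messages:
        yield {"text": text, "entities": extract_entities(text)}

if __name__ == "__main__":
    feed = ["Moody's rates ACME at Baa1", "no uppercase tokens here"]
    for record in process_stream(feed):
        print(record["entities"])
```

Keeping the extractor behind a small function boundary is what makes the "feedback loop" part tractable: the model can be swapped or retrained without touching the streaming plumbing.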
Education
Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience). Advanced degree preferred but not required.
Responsibilities
Own and modernize a mission-critical, AI-driven data platform, delivering scalable, reliable, and future-ready solutions for Moody’s Data Estate team.
- Full PD-Deliver Takeover & System Mastery (0–3 months): Quickly learn the ins and outs of the PD-Deliver platform, including its C-based codebase and custom libraries, MySQL configuration store, and dependencies on internal/external APIs.
- Support & Enhance Current Workflows for Key User Teams (3–6 months): Engage with the M&A, FDI, and other stakeholder teams to understand their usage of PD-Deliver’s “Ledger” UI and workflow.
- Infrastructure Assessment & Cloud Migration Plan (by ~6–9 months): Conduct a thorough evaluation of PD-Deliver’s infrastructure, which is currently hosted on GCP in a Linux environment and tightly integrated with legacy components.
- Modernize and Refactor Core System (start by ~9–12 months): Begin executing the migration/refactoring plan. Focus on leveraging Moody’s modern data pipeline and tools as you rebuild parts of PD-Deliver: for example, integrating with the new Redpanda/Kafka-based content streaming pipeline (replacing the older feed mechanism) and rewriting or optimizing components of the system.
- Integrate Generative AI and Agentic Automation (within 12 months): Elevate PD-Deliver by embedding GenAI capabilities and agentic code generation into its workflow.
- Ensure High Uptime and Redundancy (ongoing, goal by 12–18 months): Architect and implement solutions for near-100% uptime and robust failover.
- Autonomy: You will operate with a high degree of independence as a senior individual contributor, owning design, implementation, and delivery decisions.
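The uptime and redundancy goal above usually starts with explicit retry-and-failover semantics at the client. Here is a minimal stdlib sketch of that pattern; the function and parameter names are illustrative, not taken from PD-Deliver, and a production version would narrow the caught exceptions and add backoff.

```python
from typing import Callable, Sequence

def call_with_failover(backends: Sequence, retries_per_backend: int = 2):
    """Try each backend in priority order, retrying transient failures.

    `backends` is a sequence of zero-argument callables (e.g. wrappers
    around an HTTP client pointed at primary/secondary regions). The
    first successful result wins; if every backend exhausts its retries,
    an error is raised so callers can alert or page.
    """
    last_error = None
    for backend in backends:
        for _ in range(retries_per_backend):
            try:
                return backend()
            except Exception as exc:  # narrow to transient errors in production
                last_error = exc
    raise RuntimeError("all backends failed") from last_error
```

In practice a load balancer or service mesh often owns this concern; the sketch just makes the priority-plus-retry semantics concrete for components that must fail over themselves.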
About the team
The Data Estate Enrichment team is responsible for transforming large-scale unstructured data into trusted, actionable insights across Moody’s. The team combines advanced software engineering, data pipelines, and AI/ML techniques to enrich and distribute content used by hundreds of internal stakeholders globally.
By joining this team, you will play a pivotal role in modernizing a high-visibility platform at the intersection of data, cloud infrastructure, and artificial intelligence, helping shape Moody’s next generation of data and agentic AI capabilities.