hackajob is partnering with Verisk Analytics to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.
Verisk is seeking a Level 2 Data Developer to contribute to the design, development, and operation of data pipelines and data services that support next‑generation products for the life and annuity insurance market. At this level, you are a developing individual contributor who can deliver well‑defined data solutions with guidance, collaborate effectively with Product Owners and senior engineers, and continue to build strong foundations in data engineering best practices. You are gaining experience working in production systems and supporting data pipelines, datasets, and operational processes. This role is office-based from either our Holmdel, NJ location or our Jersey City, NJ global headquarters, both of which offer a flexible hybrid work model.
Data Engineering & Development
Develop and maintain data pipelines using cloud‑native data platforms and services (e.g., AWS‑based architectures, Snowflake, Databricks) following established patterns and team standards.
Write SQL transformations and data models that support analytics, reporting, and downstream applications.
Implement data ingestion, transformation, and enrichment logic from multiple data sources with guidance.
Contribute to building and supporting data APIs or data services that expose curated datasets.
Participate in technical design discussions and implement solutions under the direction of senior team members.
Contribute to the design and implementation of OLAP-friendly datasets (e.g., dimensional models, star/snowflake schemas) to support analytics and reporting.
Assist in building and maintaining curated analytical layers / data marts optimized for BI and downstream consumption.
Data Quality, Reliability & Operations
Support data quality for assigned pipelines, including validation, reconciliation, and monitoring activities.
Assist in identifying performance, scalability, and reliability issues and contribute to fixes.
Troubleshoot and resolve production data issues with guidance and escalation when needed.
Contribute to operational readiness through logging, alerting, and documentation.
Collaboration & Agile Delivery
Work closely with Product Owners and team members to understand requirements and acceptance criteria.
Translate user stories into technical tasks and implementation steps with support.
Help productionize features, datasets, and model inputs.
Actively participate in sprint planning, estimation, standups, and retrospectives.
Security, Compliance & Governance
Follow established data governance, privacy, and regulatory standards in all data solutions.
Handle sensitive insurance data according to compliance guidelines and best practices.
Contribute to documentation related to data access, lineage, and standards.
Experience
2–4 years of experience as a Data Developer, Data Engineer, or Software Engineer with a data focus.
Experience developing or supporting production data pipelines.
Experience working in Agile, product‑oriented teams.
Technical Skills
Proficiency in SQL, including joins, aggregations, and moderately complex transformations.
Experience with at least one data‑oriented programming language (e.g., Python, .NET).
Familiarity with ETL/ELT frameworks and modern data architectures.
Exposure to building or consuming APIs and data services.
Basic understanding of data modeling concepts and trade‑offs.
Data & Platform Knowledge
Experience working with cloud‑based data platforms, data lakes, or data warehouses (e.g., BigQuery, Amazon Redshift, Snowflake, Databricks).
Familiarity with version control and basic CI/CD concepts.
Awareness of analytics and machine learning data needs (e.g., data consistency and freshness).
Preferred Qualifications
Exposure to insurance or financial services data, especially life and annuity domains.
Experience supporting
Familiarity with data governance or metadata concepts.
Exposure to data visualization or BI tools (e.g., Sigma, Tableau, Power BI, ThoughtSpot).
Level Expectations (L2)
At this level, you are expected to:
Deliver well‑defined data pipelines or components with guidance.
Apply established patterns, tools, and standards effectively.
Grow technical depth in SQL, data modeling, and data pipeline development.
Balance development speed with code quality and reliability.
Be a collaborative and dependable team member while continuing to develop independence.
Why Join Us?
This role offers the opportunity to grow as a data engineer while contributing to data‑driven products for the life insurance industry within a mature, data‑first organization. You’ll gain hands‑on experience with real production data, learn from experienced engineers, and build a strong foundation for progression into senior and lead data roles.
#LI-LM03
#LI-Hybrid
Level up the hackajob way. Verify your skills, learn brand new ones and test your ability with Pathways, our learning and development platform.