Data Engineer
This role develops data structures and pipelines, aligned to established standards and guidelines, that organize, collect, standardize, and transform data to generate insights and address reporting needs.
Job Description
Data Engineering & Pipeline Development
- Develops data structures and pipelines aligned to established standards and guidelines.
- Ensures data quality during ingestion, processing, and final load to target tables.
- Creates standard ingestion frameworks for structured and unstructured data.
- Checks and reports on the quality of data being processed (a minimal sketch follows this list).
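To ground the pipeline and quality-check responsibilities above, here is a minimal PySpark sketch of an ingest, validate, and load job. The bucket, paths, table name, and 1% null threshold are hypothetical assumptions for illustration, not details from this posting.

```python
# Illustrative only: a minimal PySpark ingestion job with a simple data
# quality gate. Bucket names, paths, and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-ingest").getOrCreate()

# Ingest raw structured data (hypothetical S3 path).
raw = spark.read.option("header", "true").csv("s3a://example-raw-bucket/customers/")

# Standardize: trim the key column and stamp ingestion time.
clean = (
    raw.withColumn("customer_id", F.trim(F.col("customer_id")))
       .withColumn("ingested_at", F.current_timestamp())
)

# Quality gate: fail the load if too many rows are missing the key field.
total = clean.count()
missing = clean.filter(F.col("customer_id").isNull() | (F.col("customer_id") == "")).count()
if total == 0 or missing / total > 0.01:  # 1% threshold, hypothetical
    raise ValueError(f"Quality check failed: {missing}/{total} rows missing customer_id")

# Final load to the target table (hypothetical name).
clean.write.mode("append").saveAsTable("analytics.customers")
```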
Data Consumption & Access
- Creates standard methods for end users and downstream applications to consume data, including:
  - Database views
  - Extracts
  - Application Programming Interfaces (APIs)
- Develops and maintains information systems (e.g., data warehouses, data lakes), including data access APIs (a minimal view example follows this list).
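As one hedged illustration of the "database views" consumption path above, the Spark SQL below publishes a curated view so consumers never query the raw table directly. The schema, table, view, and column names are hypothetical.

```python
# Illustrative only: exposing curated data to end users through a database
# view, here via Spark SQL. All object names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("consumption-views").getOrCreate()

# A view limits consumers to approved columns with business rules pre-applied.
spark.sql("""
    CREATE OR REPLACE VIEW analytics.v_active_customers AS
    SELECT customer_id, region, signup_date
    FROM analytics.customers
    WHERE status = 'active'
""")
```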
Platform Implementation & Optimization
- Implements solutions via data architecture, data engineering, or data manipulation on:
  - On-prem platforms (e.g., Kubernetes, Teradata)
  - Cloud platforms (e.g., Databricks)
- Determines appropriate storage platforms across on-prem (MinIO, Teradata) and cloud (AWS S3, Redshift) based on privacy, access, and sensitivity requirements (a toy routing sketch follows).
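To make the storage-selection point concrete, here is a toy Python sketch that routes a dataset to an on-prem or cloud target by sensitivity. The sensitivity labels, buckets, and endpoints are invented for illustration; real rules would come from the organization's data governance policies.

```python
# Illustrative only: a toy rule for choosing a storage target by data
# sensitivity. Tiers, labels, and endpoints are hypothetical.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    sensitivity: str  # e.g., "public", "internal", "restricted"

def storage_target(ds: Dataset) -> str:
    """Map a dataset's sensitivity label to a storage platform URI."""
    if ds.sensitivity == "restricted":
        # Keep highly sensitive data on-prem (hypothetical MinIO endpoint).
        return "minio://onprem-secure/" + ds.name
    if ds.sensitivity == "internal":
        return "s3://example-internal-bucket/" + ds.name
    return "s3://example-public-bucket/" + ds.name

print(storage_target(Dataset("billing_events", "restricted")))
```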
Data Lineage & Collaboration
- Understands data lineage from source to the final semantic layer, including transformation rules, which enables faster troubleshooting and impact analysis during changes.
- Collaborates with technology and platform management partners to optimize data sourcing and processing rules.
Design Standards & System Review
- Establishes design standards and assurance processes for software, systems, and applications development.
- Reviews business and product requirements for data operations.
- Suggests changes and upgrades to systems and storage to accommodate ongoing needs.
Data Strategy & Lifecycle Management
- Develops strategies for data acquisition, archive recovery, and database implementation.
- Manages data migrations/conversions and troubleshooting of data processing issues.
- Applies data sensitivity and customer data privacy rules and regulations consistently in all Information Lifecycle Management activities.
Monitoring & Issue Resolution
- Monitors system notifications and logs to ensure database and application quality standards are met.
- Solves abstract problems by reusing existing data files and flags.
- Resolves critical issues and shares knowledge such as trends, aggregates, and volume metrics regarding specific data sources (a toy aggregation sketch follows this list).
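As a small illustration of the volume-metric reporting mentioned above, the Python sketch below aggregates row counts and failures per data source from pipeline run records. The run records and field names are fabricated for the example.

```python
# Illustrative only: summarizing volume metrics per data source from
# pipeline run records. The data and field names are hypothetical.
from collections import defaultdict

runs = [
    {"source": "billing", "rows": 120_000, "status": "ok"},
    {"source": "billing", "rows": 118_500, "status": "ok"},
    {"source": "usage",   "rows": 0,       "status": "failed"},
]

totals: dict[str, int] = defaultdict(int)
failures: dict[str, int] = defaultdict(int)
for run in runs:
    totals[run["source"]] += run["rows"]
    if run["status"] != "ok":
        failures[run["source"]] += 1

# Share the aggregates: total rows and failed runs per source.
for source in totals:
    print(f"{source}: {totals[source]} rows, {failures[source]} failed runs")
```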
Must-Have Technical Skills
- AWS (including S3, Redshift)
- PySpark
- Databricks
Additional Technical Skills
- Big Data Architecture
- Python, SQL
- Apache Spark
- Data Modeling & Pipeline Design
- Kafka / Kinesis (Streaming)
- Apache Airflow
- GitHub, CI/CD (Concourse preferred)
- MinIO
- Tableau
- Performance Tuning
- Jira (ticketing)
- Shell Commands
- Data Governance & Best Practices
hackajob is partnering with Comcast to fill this position. Create a profile to be automatically considered for this role—and others that match your experience.