Data Ops Engineer

Annapolis, MD, USA
Data Engineer | Cloud Engineer | Cloud Architect | Data Architect
Expression
Actively hiring

Expression is seeking a skilled Data Ops Engineer to join our team in Annapolis, MD, in a hybrid role. As a Data Ops Engineer, you will bridge our data and infrastructure teams, ensuring our data systems are reliable, efficient, and scalable and enabling the seamless data flows that power data-driven decision-making for our clients. You will work on challenging projects that make a real impact, as part of a collaborative environment that encourages innovation, learning, and professional development.

Responsibilities:

1. Data Infrastructure Management: Design, implement, and maintain robust data infrastructure, including databases, data warehouses, and data lakes, to support our rapidly expanding data landscape. You will lead initiatives that enhance our data capabilities and drive innovation.

2. ETL Pipeline Development and Testing: Develop, deploy, and test ETL pipelines for extracting, transforming, and loading data from various sources. You will ensure data quality and integrity, playing a key role in the accuracy of our analytical insights (a minimal sketch of such a pipeline follows this list).

3. Machine Learning Model Integration: Collaborate with data scientists and data engineers to integrate and test machine learning models within our data systems, ensuring smooth functionality and high performance. This aspect of the role provides an exciting opportunity to work at the intersection of data engineering and machine learning.

4. Automation and Orchestration: Implement cutting-edge automation and orchestration tools to streamline data operations, minimize manual processes, and boost efficiency. Your contributions will significantly enhance our operational capabilities.

5. Performance Optimization: Continuously assess and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness. You will identify and resolve bottlenecks, ensuring that our systems can handle growing demands.

6. Monitoring and Alerting: Establish proactive monitoring and alerting mechanisms to detect and address potential issues in real time. Your vigilance will help maintain high availability and reliability of our data systems.

7. Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and software engineers, to understand evolving data requirements. You will be instrumental in delivering tailored solutions that align with business objectives.

8. Documentation and Knowledge Sharing: Create comprehensive documentation of data infrastructure, pipelines, and processes. Help us promote a culture of continuous improvement by sharing knowledge and best practices within the team.
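
To make the pipeline and orchestration duties above concrete, the sketch below shows the shape of a daily ETL job as an Apache Airflow DAG (Airflow appears in the requirements that follow). It is a minimal illustration only: the source rows, task names, and alert hook are hypothetical placeholders, not Expression's actual stack.

    # Minimal Airflow 2.x DAG: extract -> transform -> load, with retries
    # and a failure-alert callback. All names and data are hypothetical.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def notify_on_failure(context):
        # Placeholder alert hook; a real deployment would page via
        # Slack, PagerDuty, or email instead of printing.
        print(f"ALERT: task {context['task_instance'].task_id} failed")


    def extract():
        # Stand-in for pulling raw rows from a source database or API.
        return [{"id": 1, "amount": "42.50"}, {"id": 2, "amount": None}]


    def transform(ti):
        # Pull the extract output from XCom, drop bad rows, cast types.
        rows = ti.xcom_pull(task_ids="extract")
        return [
            {**row, "amount": float(row["amount"])}
            for row in rows
            if row["amount"] is not None
        ]


    def load(ti):
        # Stand-in for writing clean rows to the warehouse.
        rows = ti.xcom_pull(task_ids="transform")
        print(f"loading {len(rows)} clean rows")


    default_args = {
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_on_failure,
    }

    with DAG(
        dag_id="example_daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args=default_args,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task

Here the on_failure_callback stands in for the monitoring-and-alerting responsibility: once a task exhausts its retries, the hook fires, and a real deployment would page an on-call engineer rather than print.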

Requirements:

  • Top Secret clearance with the ability to obtain a CI Polygraph
  • Security+ certification (or willingness to get certified within the first month)
  • Associate's degree or higher in engineering, computer science, or a related field and 5+ years of experience as a DevOps/Cloud/Software engineer, OR 8+ years of experience as a DevOps/Cloud/Software engineer
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Strong experience with relational databases (e.g., PostgreSQL, MySQL) and big data technologies (e.g., Hadoop, Spark).
  • Experience with Elasticsearch and Cloud Search.
  • Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
  • Experience with data pipeline orchestration tools (e.g., Airflow, Luigi) and workflow automation tools (e.g., Jenkins, GitLab CI/CD).
  • Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes) is a plus.
  • Experience with data pipeline management
  • Proven experience maintaining production systems for external customers
  • Experience working with open-source technologies such as Red Hat OpenShift and Linux/Unix
  • Experience engaging with data engineers to troubleshoot issues
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills.

Salary Range:

  • $100,000 to $160,000, depending on factors such as experience level, locality pay, and remote/hybrid/on-site schedule
