DataOps Engineer

RedCloud · Remote · Data Engineer

About RedCloud

RedCloud is leveraging AI-powered technology to break down the barriers to fair and profitable trade in emerging markets.

RedCloud's Intelligent Open Commerce Platform connects FMCG Brands, Distributors, and Local Merchants on a single, equitable marketplace, empowering them with real-world insights and data to help them make better decisions. RedCloud enables FMCG Brands to seize new opportunities in emerging markets, facilitates access to more buyers and streamlines operations for Distributors, and helps Local Merchants spend more time selling products, not searching for them.

The company comprises a highly diverse, dynamic team of driven, talented people from over twenty countries, speaking multiple languages, with a physical footprint in Africa, Europe, and Latin America.

 

The role:

The DataOps Engineer is responsible for the development, implementation, and maintenance of data pipelines and infrastructure. This role focuses on optimizing data flow and collection for cross-functional teams. The ideal candidate has a deep understanding of data engineering, DevOps, and software development principles, and is committed to enhancing data management processes.

 

Responsibilities:

Pipeline Development & Management:

  • Design, build, and maintain scalable and reliable data pipelines (a minimal sketch follows this list).
  • Automate data ingestion, transformation, and delivery processes.
  • Ensure data quality, integrity, and reliability throughout the data lifecycle.
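
To give a concrete flavour of this work, here is a minimal, hypothetical Airflow DAG (Airflow appears under Requirements below). The task names, logic, and schedule are illustrative assumptions, not RedCloud's actual pipelines.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull raw data from source systems")

def transform():
    print("clean and reshape the ingested records")

def load():
    print("write the results to the warehouse")

# Illustrative three-step pipeline: ingest -> transform -> load.
with DAG(
    dag_id="example_daily_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_ingest >> t_transform >> t_load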

Infrastructure Management:

  • Manage cloud data infrastructure.
  • Implement and optimize data storage solutions.
  • Monitor and ensure the performance, scalability, and security of data platforms.
  • Set up alerts to detect anomalies (illustrated below).
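
As a toy illustration of the alerting duty above, the sketch below flags a daily metric that drifts more than three standard deviations from its recent history. The metric, window, and threshold are assumptions chosen for illustration, not a prescribed tool.

from statistics import mean, stdev

def is_anomalous(history, today, n_sigma=3.0):
    # Flag values more than n_sigma standard deviations from the recent mean.
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(today - mu) > n_sigma * sigma

# Hypothetical daily row counts for a monitored table.
recent_counts = [10_120, 9_980, 10_340, 10_050, 10_210, 9_890, 10_160]
if is_anomalous(recent_counts, today=4_200):
    print("ALERT: today's row count is far outside the recent range")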

Collaboration & Support:

  • Collaborate with data scientists, analysts, engineers, and other stakeholders to understand data requirements and deliver optimal solutions.
  • Provide support for data-related issues and troubleshooting.
  • Assist in the development and implementation of data governance and security policies including access management.

Continuous Improvement:

  • Implement CI/CD processes for data workflows (see the sketch after this list).
  • Optimize existing processes and frameworks for better performance and cost-efficiency.
  • Stay up to date with industry trends and technologies to continuously improve data operations.
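
As a hypothetical illustration of CI/CD for data workflows, the snippet below shows the kind of automated data-quality check that could gate a deployment. The orders table, its columns, and the in-memory SQLite database are invented stand-ins for a real warehouse.

import sqlite3

def check_no_null_keys(conn, table, key_column):
    # Fail the run if any row is missing its key (identifiers are trusted here).
    query = f"SELECT COUNT(*) FROM {table} WHERE {key_column} IS NULL"
    nulls = conn.execute(query).fetchone()[0]
    assert nulls == 0, f"{nulls} rows in {table} have a NULL {key_column}"

# Stand-in data; a real check would point at the warehouse instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 4.50)])
check_no_null_keys(conn, "orders", "order_id")
print("data-quality check passed")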

 

Requirements:

Even if you don’t meet all the requirements, we encourage you to apply!

Experience:

  • 3+ years of experience in DataOps, Data Engineering, or a related role.
  • Proven experience with data pipeline tools (e.g. Airflow).
  • Hands-on experience with cloud platforms (AWS, Azure, Google Cloud) and cloud data services (e.g. Snowflake).

Technical Skills:

  • Proficiency in SQL and experience with relational databases.
  • Strong programming skills in Python.
  • Proficient with containerization and orchestration tools (e.g. Docker, Kubernetes).
  • Experience with CI/CD and infrastructure-as-code tools (e.g. Terraform, Jenkins, GitHub Actions, Concourse CI).
  • Knowledge of data warehousing and ETL/ELT processes.

Soft Skills:

  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
  • Ability to work independently and as part of a team.
  • Attention to detail and a commitment to data quality.
