Vacancy title:
Data Engineer (Platform)
[ Type: FULL TIME, Industry: Consulting, Category: Science & Engineering ]
Jobs at:
Zembo
Deadline of this Job:
Monday, March 03, 2025
Duty Station:
Within Uganda, Kampala, East Africa
Summary
Date Posted: Monday, February 17, 2025; Base Salary: Not Disclosed
JOB DETAILS:
About Us
We are Zembo, the start-up paving the way for the e-mobility revolution in Africa. Zembo sells electric motorcycle taxis and offers a battery-swap service via a network of stations. After six years of operation, Zembo is the continent's most experienced provider of electric motorcycles and battery swaps. We are scaling up in Uganda, providing an affordable and environmentally responsible mobility solution.
About the Role
We are looking for a Senior Data Engineer with strong platform expertise to design, build, and maintain scalable data pipelines while also managing the infrastructure that supports them. We are a small but fast-growing team, so we need someone who can wear both hats: developing ETL/ELT pipelines while ensuring our data platform is reliable, cost-efficient, and future-proof.
This role is critical to optimizing our IoT data ingestion, analytics, and business intelligence across the company. If you love working with data pipelines, cloud infrastructure, and automation, this is a great opportunity to make a high-impact contribution.
Responsibilities
Data Engineering
• Design, build, and maintain scalable data pipelines for analytics, BI, and operational decision-making.
• Develop and optimize ETL/ELT processes to move data from IoT devices, APIs, and databases.
• Ensure data integrity, quality, and governance across different sources.
• Improve query performance and optimize data access for BI tools like Metabase.
• Support real-time and batch data processing using tools like Apache Kafka, Spark, or Flink.
Platform Engineering
• Design and manage the data platform infrastructure, ensuring scalability, reliability, and cost-effectiveness.
• Automate data pipeline orchestration using tools like Airflow, Dagster, or Prefect.
• Own and optimize data storage solutions, including data lakes (S3, Delta Lake) and data warehouses (BigQuery, Snowflake, MariaDB).
• Implement monitoring, logging, and alerting to track data pipeline health (Grafana, Prometheus).
• Optimize data workflows through DevOps and Infrastructure-as-Code (Terraform, Kubernetes, Docker).
• Enable self-service data tooling for analysts, engineers, and business teams.
Requirements
Desired
• Deep knowledge of data warehousing and ETL/ELT patterns.
• Hands-on experience with data pipeline orchestration tools (Airflow, Dagster, Prefect).
• Strong understanding of cloud platforms (AWS, GCP, Azure) and storage systems.
• Experience with Kubernetes, Terraform, and CI/CD pipelines for data workflows.
• Familiarity with streaming data technologies (Kafka, Pulsar, MQTT).
• Ability to work in a fast-moving, high-ownership environment with minimal supervision.
• Exposure to BI tools like Metabase, Looker, or Tableau.
Nice to have
• Experience with IoT data pipelines (InfluxDB, MQTT, TimescaleDB).
• Knowledge of data governance, access control, and security best practices.
• Background in monitoring and optimizing high-scale data platforms.
Education Requirement: No Requirements
Job Experience: No Requirements
Work Hours: 8
Job application procedure
Interested and qualified? Click here to apply.