Data Engineer (Cloud & Containerisation)
We are looking for a Data Engineer to join our consulting team. This role is central to driving data migration efforts, optimising CI/CD workflows, and containerising data pipelines for our clients. As a Data Engineer, you will build scalable solutions across innovative cloud and data projects.
Our ideal candidate thrives in a dynamic environment, demonstrates initiative, and has deep expertise in data engineering, cloud technologies, and containerisation.
Key Responsibilities:
- Collaborate with the Lead Data Engineer to design and implement migration strategies for large-scale data infrastructure.
- Build and maintain CI/CD workflows, ensuring automation and reliability in data pipeline operations.
- Lead containerisation efforts using Docker and manage image repositories.
- Develop and maintain Python applications, focusing on ETL/ELT processes and data modelling.
- Orchestrate ETL workflows using Airflow and integrate them with Kubernetes.
- Ensure efficient data delivery and reliability across cloud-based environments.
Qualifications & Skills:
- Proficient in SQL (Data modelling, ETL, ELT).
- Hands-on experience with Python (Flask/Django, OOP).
- Expertise in Docker (Image repository management, container builds).
- Solid understanding of CI/CD workflows (Git).
- Experience with Airflow (ETL pipeline management).
Nice to Have:
- Experience with NiFi (HTTP, Kafka data integration).
- Knowledge of Kafka (Consumer/producer debugging, testing).
- Familiarity with Spark SQL (Query optimisation).
- Experience with Kubernetes (Operations, deployment).
- MLOps knowledge (Data science workflow automation).