The main responsibilities are:
Data modeling and pipeline development with Spark on Scala, ingesting and transforming data from several sources (Kafka topics, APIs, HDFS, structured databases).
Data transformation and quality checks to ensure data consistency and accuracy.
Setting up CI/CD pipelines to automate deployments, unit testing, and development management.
Implementing orchestrators and scheduling processes to automate data pipeline execution (Airflow as a service).
Modifying existing code to meet business requirements and continuously improving it for better performance and maintainability.
Ensuring the performance and security of the data infrastructure and following data engineering best practices.
Contributing to production support, correcting incidents and anomalies, and implementing functional and technical evolutions to ensure the stability of production processes.
Writing technical documentation to ensure knowledge capitalization.
Qualifications:
Good knowledge of
Full understanding of
Some knowledge of *
Optionally/ as a plus *
Additional Information:
What we can offer you
Remote Work:
No
Employment Type:
Full-time