Objectives of this position:
The objective of the position is to manage the extract/transform/load (ETL) processes, ensuring data availability.
Responsibilities:
The holder of the position is mainly responsible for the following areas, in coordination with their superior:
Design, create, and modify extract/transform/load (ETL) pipelines in Azure Data Factory, ensuring efficient data flow from source to destination.
Ensure data accuracy and integrity throughout the ETL processes via validation, cleansing, deduplication, and error handling, so that reliable and usable data is ingested.
Monitor and optimize ETL pipelines for speed and efficiency, addressing bottlenecks and ensuring the ETL system can handle the volume, velocity, and variety of data.
Participate in data modeling, designing data structures and schemas in the data warehouse to optimize query performance and align with business needs.
Work closely with different departments and the IT team to understand data requirements and deliver a data infrastructure that supports business goals.
Provide technical support for ETL systems, troubleshooting issues and ensuring the continuous availability and reliability of data flows.
Ensure proper documentation of data sources, ETL processes, and data architecture.
Requirements:
4-6 years of database experience in an enterprise production environment
Working experience with Azure cloud, Azure Data Factory, and Oracle Cloud is a must; Snowflake would be an advantage
Demonstrable proficiency in PL/SQL and in writing SQL scripts
Experience in data ingestion, cleansing, and data modeling
Hands-on database administration skills on Oracle and PostgreSQL would be an advantage
Good analytical and troubleshooting skills
Curious, proactive, self-motivated, and able to work in a dynamic environment
Good spoken and written communication skills in English to communicate with English-speaking users
Position title: ETL Engineer
Education:
Degree / diploma