Our data team has expertise across engineering, analysis, architecture, modeling, machine learning, artificial intelligence, and data science. This discipline is responsible for transforming raw data into actionable insights, building robust data infrastructure, and enabling data-driven decision-making and innovation through advanced analytics and predictive modeling.
Responsibilities:
- Collaborate with data engineers, scientists, and analysts to understand data needs and build effective data pipelines
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation using cloud services and other technologies
- Create and maintain data solutions, primarily in Azure, AWS, or GCP (Redshift, Aurora, Athena, Data Factory, BigQuery, etc.) and occasionally on-premises (SQL Server, Oracle, PostgreSQL)
- Implement data validation and cleansing procedures to ensure the quality, integrity, and reliability of the data
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines
- Monitor and resolve data pipeline issues, ensuring data consistency and availability
Qualifications:
- Proficient in Python and SQL
- Strong experience with relational and, ideally, NoSQL databases.
- Knowledge of sound data design principles (concepts such as the Kimball methodology and star schemas), data warehousing concepts, and data modeling.
- Strong understanding of ETL/ELT processes.
- Exposure to data monitoring and observability tools.
- Relevant experience with event-driven architecture
- Good understanding of Agile ways of working
Remote Work:
No
Employment Type:
Full-time