As a member of the Data Engineering team, your role will involve supervising the maintenance and enhancement of data pipelines on the Databricks platform. Your main tasks will include handling emerging business needs, refining ETL processes, and ensuring the seamless flow of energy data across our systems.
Tasks
- Design, create, and maintain a robust and scalable data infrastructure on Databricks with AWS.
- Work closely with the ETL and front-end teams to integrate data sources and create a unified data model.
- Ensure the availability, integrity, and performance of data pipelines.
- Collaborate with cross-functional teams to identify opportunities for data-driven enhancements and insights.
- Monitor and analyze platform performance, identify bottlenecks, and recommend improvements.
- Develop and maintain technical documentation for ETL implementations.
- Stay current with the latest Databricks/Spark features and best practices, and contribute to the continuous improvement of our data management capabilities.
Requirements
- 5 years' experience as a Data Engineer, including prior experience implementing pipelines in Databricks.
- Strong expertise in Spark.
- Proficiency in SQL and scripting languages (e.g., Python).
- Experience with cloud environments, particularly AWS, is welcome.
- Excellent analytical and problem-solving skills.
- Fluency in English is essential.
- Ability to work effectively in a fast-paced, collaborative environment.
- Detail-oriented and able to prioritize tasks.
- Adaptability and willingness to learn new technologies and tools.
- Strong customer orientation.
Benefits
Location: Geneva
Remote: 2 days per week.
Start date: ASAP