Data Pipeline Engineering
In this role you'll design and implement complex data pipelines, continuously aligning with data scientists and business requirements. The pipelines are designed with a focus on simplicity and maintainability while ensuring correctness and performance. Our pipelines currently run on top of Apache Airflow with several extracted components (external REST services, cloud services, or Docker containers).
Query Performance Analysis & Optimization
To provide our clients with the required insights, large quantities of data must be analysed and processed in real time. Much of the data-processing heavy lifting is performed using Google BigQuery. You'll be asked to help optimize complex SQL queries and analyse them to understand performance bottlenecks.
Cloud Engineering
We strive to keep operational effort to a minimum and therefore use a wide array of cloud services, mostly on top of Google Cloud Platform. In your role you'll configure, monitor, and test cloud services to continuously evolve the product and make the best use of available services.
Site Reliability Engineering
A large proportion of this role will also be dedicated to site reliability engineering. You'll propose, design, and implement monitoring strategies aimed at quickly detecting issues in our data products. Additionally, you'll work with the team on implementing corrective measures to keep the impact of data issues to a minimum.
Continuous Innovation
Being a small startup, we are continuously challenged to find smart and efficient ways to solve complex problems. You'll help the team identify new approaches to tackle challenging product requirements and/or help us deliver new features faster and with higher quality.
Qualifications :
4 years of experience working as a software or data engineer
Ability to design and build maintainable and reliable Python code.
English fluency
Ability to design and implement complex SQL queries.
Experience building data processing pipelines in Python (Apache Airflow).
Experience with test-driven development and writing well-tested software applications.
Experience with big data solutions offered by major cloud providers (esp. Google Cloud).
Experience building critical software components with high uptime guarantees is a plus.
Experience with Apache Kafka or related stream processing frameworks is a plus.
Additional Information :
For legal reasons we are obliged to disclose the minimum salary for this position according to the collective agreement, which is 2,691 gross per month. However, our attractive compensation package is based on market-oriented salaries and is therefore significantly above the stated minimum salary.
As an employer, we value diversity and support people in developing their potential and strengths, realizing their ideas, and seizing opportunities. We believe passionately that employing a diverse workforce is central to our success. We welcome applications from all members of society, irrespective of age, skin colour, religion, gender, sexual orientation, or origin.
Remote Work :
No
Employment Type :
Full-time