Join our team to work on enhancing a robust data pipeline that powers our SaaS product, ensuring seamless contextualization, validation, and ingestion of customer data. Collaborate with product teams to unlock new user experiences by leveraging data insights. Engage with domain experts to analyze real-world engineering data and build data quality solutions that inspire customer confidence. Additionally, identify opportunities to develop self-service tools that streamline data onboarding and make it more accessible for our users.
Our Scandinavia-based client was established with the mission to fundamentally transform capital projects and operations. Designed by industry experts for industry experts, the client's platform empowers users to digitally search, visualize, navigate, and collaborate on assets. Drawing on 30 years of software expertise and 180 years of industrial legacy as part of a renowned Scandinavian business group, the client plays an active role in advancing the global energy transition. The company operates from Norway, the UK, and the U.S.
Key Responsibilities:
- Design, build, and maintain data pipelines using Python
- Collaborate with an international team to develop scalable data solutions
- Conduct in-depth analysis and debugging of system bugs (Tier 2 support)
- Develop and maintain smart documentation for process consistency, including the creation and refinement of checklists and workflows
- Set up and configure new tenants, collaborating closely with team members to ensure smooth onboarding
- Write integration tests to ensure the quality and reliability of data services
- Work with GitLab to manage code and collaborate with team members
- Utilize Databricks for data processing and management
Requirements:
- Programming: Minimum of 3-4 years as a data engineer or in a relevant field
- Python Proficiency: Advanced experience in Python, particularly in delivering production-grade data pipelines and troubleshooting code-based bugs
- Data Skills: A structured approach to extracting insights from data
- Cloud: Familiarity with cloud platforms (preferably Azure)
- Data Platforms: Experience with Databricks, Snowflake, or similar data platforms
- Database Skills: Knowledge of relational databases with proficiency in SQL
- Big Data: Experience using Apache Spark
- Documentation: Experience in creating and maintaining structured documentation
- Testing: Proficiency in utilizing testing frameworks to ensure code reliability and maintainability
- Version Control: Experience with GitLab or equivalent tools
- English Proficiency: B2 level or higher
- Interpersonal Skills: Strong collaboration abilities, experience in an international team environment, willingness to learn new skills and tools, and an adaptive, exploratory mindset
Nice to have:
- Experience with Docker and Kubernetes
- Experience with document and graph databases
- 3D Modeling: Familiarity with 3D matching and model refinement processes is a plus
- Ability to travel abroad twice a year for onsite workshops
We offer:
- Flexible working format: remote, office-based, or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team-building activities
- Other location-specific benefits