What will I be doing?
- Design and implement data models & pipelines based on business and functional requirements.
- Extract, transform and load (ETL) data from various sources into Power BI, ensuring data quality and integrity.
- Monitor and troubleshoot performance issues related to ETL.
- Apply programming languages such as Python, SQL and PySpark.
- Develop extract, transform and load logic to automate data collection and manage data processes/pipelines, including data quality checks and monitoring.
- Design solutions using Microsoft Azure services (Azure Data Factory, Azure Databricks) and other cloud technologies.
- Create and maintain documentation for data processes and reporting solutions.
Qualifications
What do I need to bring with me?
- Bachelor's degree in Computer Science, Information Technology, Data Science or a related field. Master of Business degree preferred.
- At least 3-5 years' hands-on work experience as a Data Engineer or in a similar role with a focus on Power BI, including strong proficiency in DAX, Power Query and data modelling.
- Working experience with DevOps and CI/CD principles (e.g. Azure DevOps, Octopus Deploy, GitLab).
- Experience with various forms of relational schema data modelling, including both physical and conceptual modelling.
- Experience with SQL and relational databases, along with knowledge of ETL processes and tools.
- Excellent problem-solving skills and strong communication skills, with the ability to present complex data insights to non-technical stakeholders.
- Ability to ensure the reliability and accuracy of data pipelines and data models, and to provide effective diagnostic tools for troubleshooting production issues.
- Strong analytical and troubleshooting skills with a proactive approach to identifying and resolving issues.
- Ideally, some knowledge and experience in the Finance/Accounting and Sales/Marketing domains.
Skills: DAX, PySpark, Azure, SQL, Python, ETL, Power BI