Job Summary:
Position: Senior Developer (Python, PySpark, Azure Databricks)
Location: Minneapolis, MN (Hybrid)
Role: Key member of the data engineering team, responsible for building scalable data solutions, optimizing data pipelines, and contributing to the overall data strategy.
Tasks: Designing and developing ETL pipelines, deploying data models, managing workflows, ensuring data governance, collaborating with other teams, ensuring high performance of data platforms, automating processes, troubleshooting data issues, and staying current with the latest industry trends.
Requirements: 5 years of experience in software or data engineering; proficiency in Python, PySpark, SQL, and Azure services; expertise in Databricks; experience in designing and managing ETL pipelines; knowledge of cloud infrastructure and containerization; strong problem-solving skills; and the ability to work in an agile, cross-functional environment.
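
For illustration, a minimal sketch of the kind of PySpark ETL pipeline this role involves, assuming a Databricks-style environment where a SparkSession is available; all paths, table names, and column names below are hypothetical examples, not part of the actual job requirements:

```python
# Minimal, illustrative PySpark ETL sketch (extract -> transform -> load).
# Paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F


def run_orders_etl(spark: SparkSession, source_path: str, target_table: str) -> None:
    """Extract raw order events, apply basic cleansing, and load a curated table."""
    # Extract: read raw JSON events from the (hypothetical) landing zone.
    raw = spark.read.json(source_path)

    # Transform: drop malformed rows, derive a date column, and aggregate daily revenue.
    curated = (
        raw.dropna(subset=["order_id", "order_ts", "amount"])
           .withColumn("order_date", F.to_date("order_ts"))
           .groupBy("order_date", "customer_id")
           .agg(
               F.sum("amount").alias("daily_revenue"),
               F.count("order_id").alias("order_count"),
           )
    )

    # Load: overwrite the curated table (Delta is the default table format on Databricks).
    curated.write.mode("overwrite").saveAsTable(target_table)


if __name__ == "__main__":
    spark = SparkSession.builder.appName("orders_etl").getOrCreate()
    run_orders_etl(spark, "/mnt/raw/orders/", "analytics.daily_revenue")
```

In practice, a pipeline like this would typically be scheduled as a Databricks job or workflow, with data-quality checks and monitoring added around the transform and load steps.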