Job Details:
- Design, implement, and manage Azure cloud solutions for various projects.
- Develop and maintain data pipelines using Databricks (PySpark) for data ingestion, processing, and analysis.
- Configure and manage Azure Data Factory (ADF) to orchestrate data workflows and ETL processes.
- Implement and optimize Azure Data Lake Storage (ADLS) for efficient data storage and retrieval.
- Collaborate with cross-functional teams to design and implement data warehouse solutions.
- Utilize Git for version control and collaboration on the codebase.
- Monitor, troubleshoot, and optimize data processes for performance and reliability.
- Implement security best practices and manage access controls using Azure Active Directory (AAD).
- Document technical designs, processes, and procedures.
- Stay updated with the latest Azure cloud technologies and best practices.
Requirements, experience, and skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience working with Azure cloud services, including Azure Data Lake Storage, Azure Data Factory, and Azure Active Directory.
- 10 years of experience required.
- Strong proficiency in PySpark and experience with Databricks for data engineering and analytics.
- Hands-on experience with Git for version control and collaboration.
- Familiarity with data warehousing concepts and technologies.
- Experience with SQL and relational databases.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Ability to work effectively in a fast-paced, dynamic environment.
- Azure certifications (e.g., Azure Administrator, Azure Data Engineer) are a plus.
Skills:
- Snowflake
- Azure Databricks
- Azure Data Factory
- PySpark
- Python
- SQL