We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will play a crucial role in our data ecosystem by working with cloud technologies to enable data accessibility, quality, and insights across the organization. This role requires expertise in Azure Databricks, Snowflake, and dbt.

Requirements:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Proficiency in Azure Databricks for data processing and pipeline orchestration.
- Experience with Snowflake as a data warehouse platform and dbt for transformations.
- Strong SQL skills and understanding of data modeling principles.
- Ability to troubleshoot and optimize data workflows.

Key Responsibilities:
- Data Pipeline Development: Design, build, and optimize data pipelines to ingest, transform, and load data from multiple sources using Azure Databricks, Snowflake, and dbt.
- Data Architecture: Develop and manage data models within Snowflake, ensuring efficient data organization and accessibility.
- Data Transformation: Implement transformations in dbt, standardizing data for analysis and reporting (see the illustrative sketch at the end of this posting).
- Performance Optimization: Monitor and optimize pipeline performance, troubleshooting and resolving issues as needed.
- Collaboration: Work closely with data scientists, analysts, and other stakeholders to support data-driven projects and provide access to reliable, well-structured data.

Qualifications:
- Relevant experience with Microsoft Azure, Snowflake, dbt, and Big Data/Hadoop ecosystem components.
- Understanding of Hadoop architecture and its underlying framework, including storage management.
- Strong understanding of, and implementation experience with, Hadoop, Spark, and Hive/Databricks.
- Expertise in implementing data lake solutions using Scala as well as Python.
- Expertise with orchestration tools such as Azure Data Factory.
- Strong SQL and programming skills.
- Experience with Databricks is desirable.
- Understanding of, or implementation experience with, CI/CD tools such as Jenkins, Azure DevOps, and GitHub is desirable.
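For illustration only, below is a minimal sketch of the kind of dbt transformation this role would own, written as a dbt Python model running on Snowflake via Snowpark (dbt supports Python models from v1.3). The model, table, and column names (fct_daily_orders, stg_orders, order_id, amount) are hypothetical, not part of our actual project.

    # models/marts/fct_daily_orders.py -- hypothetical dbt Python model
    # Aggregates a staging table into a daily order/revenue fact table.
    import snowflake.snowpark.functions as F

    def model(dbt, session):
        # Materialize the result as a table in Snowflake
        dbt.config(materialized="table")

        # Pull the upstream staging model through dbt's dependency graph
        orders = dbt.ref("stg_orders")

        # Standardize: one row per order date with counts and total revenue
        return (
            orders.group_by("order_date")
                  .agg(F.count("order_id").alias("order_count"),
                       F.sum("amount").alias("total_revenue"))
        )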