Position: Data Engineer
Contract: 3-6 months
Experience: 4 Years
Location: Remote (2-3 days travel to the client office in Gurgaon on a monthly basis)
Budget: 90K PM + GST
Mandatory Skills: PySpark, Azure, ADF, Databricks, ETL, SQL
Key Responsibilities
- Design and implement robust data pipelines for processing and integrating data from various sources.
- Develop and maintain data models and metadata, ensuring data integrity and reliability.
- Utilize Azure Data Factory for orchestrating data workflows and automating data movement.
- Collaborate with data scientists and analysts to understand and translate requirements into technical specifications.
- Monitor system performance and troubleshoot data-related issues to enhance data quality.
- Integrate diverse data sources, including on-premises databases and cloud storage.
- Implement security measures to protect sensitive data and ensure compliance with data governance policies.
- Conduct data profiling and analysis to identify trends and anomalies in datasets.
- Utilize Azure Databricks for large-scale data processing and data preparation.
- Create and manage Azure SQL Database instances for storing processed data.
- Generate insights through data visualization techniques using Power BI or similar tools.
- Develop ETL processes to facilitate ongoing data integration efforts.
- Ensure documentation of data architecture and processes for future reference and enhancements.
- Engage in continuous improvement practices, recommending solutions for optimizing data workflows.
- Stay updated on Azure platform capabilities and industry trends to leverage new tools and features effectively.
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 3 years of experience in data engineering or a related role.
- Hands-on experience with Microsoft Azure services, particularly Azure Data Factory and Azure SQL Database.
- Proficiency in programming languages such as Python or SQL for data manipulation.
- Experience with ETL tools and processes in cloud environments.
- Strong knowledge of data modeling concepts and methodologies.
- Familiarity with data warehousing solutions and architectures.
- Understanding of big data technologies like Hadoop or Spark is a plus.
- Experience with data visualization tools like Power BI or Tableau.
- Knowledge of data governance and compliance requirements.
- Excellent problem-solving and analytical thinking abilities.
- Strong communication skills for collaboration with cross-functional teams.
- Ability to work independently and manage multiple priorities effectively.
- Certification in Azure Data Engineering or related credentials is preferred.
- Demonstrated capability to adapt to changing technologies and industry trends.
Skills: SQL, Azure, Python, Power BI, ETL, ADF, Databricks, Data Governance, PySpark, Data Modeling