Job Description:
Design, develop, and maintain ETL pipelines using Databricks, PySpark, and ADF to extract, transform, and load data from various sources.
Must have strong skills in PySpark programming, code remediation, etc.
Must have solid working experience with Delta tables, including deduplication and merging on terabyte-scale datasets.
Optimize and fine-tune existing ETL workflows for performance and scalability.
2 to 3 years of experience in ADF is desirable (medium expertise required).
Must have experience working with large datasets.
Proficient in SQL; must have worked with complex joins, subqueries, functions, and stored procedures.
Candidates should be self-driven and able to work independently without support.
Full Time