Job Title: Data Engineer
Location: Specify Location or Remote
Responsibilities:
- Design, build, and maintain scalable data pipelines using PySpark on Databricks.
- Work with Delta Lake to ensure data reliability, consistency, and scalability.
- Use Azure Synapse for integrated analytics, querying, and big data processing.
- Develop and deploy data solutions on Azure, including data lakes, data warehouses, and data pipelines.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Monitor and optimize data pipelines for performance and cost efficiency.

Requirements:
- PySpark: Hands-on experience writing PySpark scripts and managing Spark clusters.
- Delta Lake: Understanding of Delta Lake features such as ACID transactions and schema evolution.