Roles & Responsibilities:
- Lead the effort to design, build, and configure PySpark applications, acting as the primary point of contact.
- Collaborate with cross-functional teams to ensure timely delivery of high-quality solutions.
- Develop and deploy PySpark applications, applying best practices and ensuring adherence to coding standards.
- Provide technical guidance and mentorship to junior team members, fostering a culture of continuous learning and improvement.
- Stay updated with the latest advancements in PySpark and related technologies, integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Good-to-Have Skills: Experience with Hadoop, Hive, and other Big Data technologies.
- Strong understanding of distributed computing principles and data processing frameworks.
- Experience with data ingestion, transformation, and storage using PySpark (a minimal sketch of this kind of work follows the list).
- Solid grasp of SQL and NoSQL databases.
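As a rough illustration of the ingestion, transformation, and storage work mentioned above, the sketch below reads a CSV file, applies a simple DataFrame transformation, and writes the result as Parquet. The file paths and column names (orders.csv, amount, order_date) are hypothetical placeholders, not part of any specific project.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Placeholder application name; adjust to the actual job.
spark = SparkSession.builder.appName("example-ingest-transform").getOrCreate()

# Ingestion: read raw CSV data (path is a placeholder).
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transformation: cast, filter, and aggregate with the DataFrame API.
daily_totals = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Storage: write the result as partitioned Parquet (path is a placeholder).
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/daily_totals"
)

spark.stop()
```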