Job Description
Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Lambda, and Step Functions (see the pipeline sketch after this list).
- Develop ETL processes using PySpark and Snowflake to extract insights from various data sources.
- Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs.
- Ensure high availability and scalability of the system by implementing monitoring tools like CloudWatch and Logz.io.
- Troubleshoot issues related to data processing workflows and provide timely resolutions.
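For illustration, here is a minimal sketch of the kind of pipeline glue described above: an S3-triggered Lambda handler that uses boto3 to start a Step Functions execution for each new object. The state machine ARN, environment variable name, and event shape are assumptions for the example, not details from this posting.

```python
import json
import os

import boto3

# Hypothetical state machine ARN, supplied via the Lambda function's configuration.
STATE_MACHINE_ARN = os.environ["PIPELINE_STATE_MACHINE_ARN"]

sfn = boto3.client("stepfunctions")


def handler(event, context):
    """Start a Step Functions execution for each new object landing in S3."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"statusCode": 200}
```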
Job Requirements:
- Strong proficiency in the Python programming language, with hands-on experience working with libraries such as boto3 or SageMaker.
- Experience working with big-data technologies such as Spark (PySpark) and SQL databases (e.g., Snowflake); see the ETL sketch after this list.
- Familiarity with Azure Data Lake Storage (ADLS), Event Hubs, and Databricks is an added advantage.
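As a rough illustration of the PySpark-plus-Snowflake requirement, the sketch below reads raw JSON from S3, applies a simple filter, and appends the result to a Snowflake table through the Snowflake Spark connector. The bucket path, column names, table name, and connection values are placeholders, and the connector package is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-to-snowflake-etl").getOrCreate()

# Hypothetical Snowflake connection options; the Snowflake Spark connector
# (net.snowflake.spark.snowflake) must be on the cluster's classpath.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

# Extract: read raw JSON events landed in S3 by an upstream pipeline (placeholder path).
events = spark.read.json("s3a://my-data-lake/raw/events/")

# Transform: keep only completed events and a few columns of interest.
completed = events.filter(events.status == "COMPLETED").select("event_id", "user_id", "amount")

# Load: append the result to a Snowflake table.
(
    completed.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "COMPLETED_EVENTS")
    .mode("append")
    .save()
)
```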
Organization: Reputed product- and service-based company (hiring through ALP Consulting)
Employment Type: Permanent
Job Location: Pan India
Skills Required: Excellent communication skills
Note: Candidates with a notice period of 60 days or less will be prioritized.