Salary Not Disclosed
Looking for a Data Engineer with 6 years of experience building and maintaining data pipelines.
Candidates should have relevant experience working with Databricks and AWS/Azure data storage technologies, such as databases and distributed file systems, and should be familiar with the Spark framework. Retail experience is an added advantage.
Responsibilities:
Design, develop, enhance, and maintain scalable ETL pipelines to process large volumes of data from various sources
Implement and manage data integration solutions using tools such as Databricks, Snowflake, and other relevant technologies
Develop and optimize data models and schemas to support analytics and reporting needs
Write efficient and maintainable code in Python for data processing and transformations
Utilize Apache Spark for distributed data processing and large-scale data analytics
Convert business requirements into technical solutions
Ensure data quality and integrity through unit testing
Collaborate with cross-functional teams to integrate data pipelines with other systems
Technical Requirements:
Proficiency in Databricks for data integration and processing
Experience with ETL tools and processes
Proficiency in Python programming with Apache Spark, with a focus on data processing and automation
Strong SQL skills and experience with relational databases
Familiarity with data warehousing concepts and best practices
Exposure to cloud platforms such as AWS and Azure
Hands-on problem-solving skills and the ability to troubleshoot complex data issues
Hands-on experience with Snowflake
Remote Work: No
Full Time