Do you love a career where you Experience, Grow & Contribute at the same time while earning at least 10% above the market? If so, we are excited to have crossed paths with you.
If you are an Azure Databricks Data Engineer looking for excitement, challenge, and stability in your work, then you will be glad you came across this page.
We are an IT Solutions Integrator/Consulting Firm helping our clients hire the right professional for an exciting long-term project. Here are a few details.
Check if you are up for maximizing your earning and growth potential by leveraging our Disruptive Talent Solution.
Role: Azure Databricks Data Engineer
Location: Hyderabad/Bangalore/Gurgaon/Mumbai/Pune
Work Mode: Hybrid
Experience: 5 years
Requirements
We are looking for a skilled Data Engineer to join our team. The ideal candidate should have experience with Databricks SQL, PySpark, Spark, Python, and Azure Data Factory (ADF), and should be able to design, develop, and maintain data pipelines and data streams, including moving and transforming data across the Bronze, Silver, and Gold layers using ADF, Python, and PySpark.
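For illustration only, and not an official part of this posting, below is a minimal PySpark sketch of the kind of Bronze-to-Silver transformation described above. The table names, columns, and formats are hypothetical assumptions, not details of the client project.

# Minimal illustrative sketch of a Bronze -> Silver transformation in PySpark.
# Table names, columns, and formats are hypothetical assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_example").getOrCreate()

# Read raw ingested records from a hypothetical Bronze table.
bronze_df = spark.read.table("bronze.customer_orders")

# Basic cleansing: drop duplicate keys, standardise types, filter bad rows.
silver_df = (
    bronze_df
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull())
)

# Write the curated result to a hypothetical Silver table in Delta format.
silver_df.write.format("delta").mode("overwrite").saveAsTable("silver.customer_orders")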
Responsibilities:
Design and build modern data pipelines and data streams.
Move and transform data across layers (Bronze, Silver, Gold) using ADF, Python, and PySpark.
Develop and maintain data pipelines and data streams.
Work with stakeholders to understand their data needs and provide solutions.
Collaborate with other teams to ensure data quality and consistency.
Develop and maintain data models and data dictionaries.
Develop and maintain ETL processes.
Develop and maintain data quality checks (see the sketch after this list).
Develop and maintain data governance policies and procedures.
Develop and maintain data security policies and procedures.
Provide technical guidance and mentorship to junior data engineers.
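As a rough illustration of the data quality checks mentioned above (again, the table and column names are hypothetical assumptions rather than project details), a simple check in PySpark might look like this:

# Minimal illustrative sketch of a data quality check in PySpark.
# Table and column names are hypothetical assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data_quality_check_example").getOrCreate()

silver_df = spark.read.table("silver.customer_orders")

# Rule 1: the primary key must be non-null and unique.
total_rows = silver_df.count()
distinct_keys = (
    silver_df.where(F.col("order_id").isNotNull())
    .select("order_id")
    .distinct()
    .count()
)

# Rule 2: order amounts must not be negative.
negative_amounts = silver_df.where(F.col("amount") < 0).count()

# Fail the pipeline run if either rule is violated.
if distinct_keys != total_rows or negative_amounts > 0:
    raise ValueError(
        f"Data quality check failed: {total_rows - distinct_keys} null/duplicate keys, "
        f"{negative_amounts} negative amounts"
    )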
Requirements:
Experience in Databricks SQL, PySpark, Spark, Python, and Azure Data Factory (ADF).
Experience in designing, developing, and maintaining data pipelines and data streams.
Experience in moving and transforming data across layers (Bronze, Silver, Gold) using ADF, Python, and PySpark.
Experience in working with stakeholders to understand their data needs and provide solutions.
Experience in collaborating with other teams to ensure data quality and consistency.
Experience in developing and maintaining data models and data dictionaries.
Experience in developing and maintaining ETL processes.
Experience in developing and maintaining data quality checks.
Experience in developing and maintaining data governance policies and procedures.
Experience in developing and maintaining data security policies and procedures.
Soft Skills
Communication skills, negotiation skills