Job description
Mandatory skills: data modelling, big data technologies (Hive, Spark, Hadoop), GCP cloud experience.

Roles/Responsibilities:
- Develops and maintains scalable data pipelines to support continuing increases in data volume and complexity.
- Collaborates with analytics and business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
- Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Writes unit/integration tests, contributes to the engineering wiki, and documents work.
- Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
- Works closely with a team of frontend and backend engineers, product managers, and analysts.
- Defines company data assets (data models) and the Spark, Spark SQL, and Hive SQL jobs that populate them.
- Designs data integrations and a data quality framework.
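The data-quality monitoring responsibility above could, in miniature, look like the sketch below: compute the null rate of each required field and flag any field that exceeds an allowed threshold. The `rows`, field names, and 5% threshold are illustrative assumptions, not part of the posting; in practice such checks would run against Hive/Spark tables rather than in-memory dicts.

```python
def null_rate(rows, field):
    """Fraction of rows where `field` is missing or None."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(field) is None) / len(rows)

def failing_fields(rows, fields, threshold=0.05):
    """Return the fields whose null rate exceeds the allowed threshold."""
    return [f for f in fields if null_rate(rows, f) > threshold]

# Illustrative sample: customer_id is missing in 2 of 4 rows (50% > 5%).
rows = [
    {"order_id": 1, "customer_id": 10},
    {"order_id": 2, "customer_id": None},
    {"order_id": 3, "customer_id": 11},
    {"order_id": 4},
]
print(failing_fields(rows, ["order_id", "customer_id"]))  # ['customer_id']
```

A production version of this idea typically emits metrics or alerts instead of printing, and is scheduled alongside the pipeline that loads the table.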
Basic Qualifications:
- BS or MS degree in Computer Science or a related technical field
- 4 years of Python or Java development experience
- 4 years of SQL experience (NoSQL experience is a plus)
- 4 years of experience with schema design and dimensional data modelling
- 4 years of experience with Big Data technologies like Spark and Hive
- 2 years of experience in data engineering on Google Cloud Platform services such as BigQuery
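The schema design and dimensional modelling qualification above can be illustrated with a minimal star schema: a dimension table joined to a fact table by a surrogate key, queried with the kind of aggregate a BI tool would issue. The table and column names are invented for the example; sqlite3 stands in for a warehouse like Hive or BigQuery purely so the sketch is self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: descriptive attributes, one row per calendar date.
cur.execute("""
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,
    full_date TEXT,
    month     TEXT
)""")

# Fact table: one row per sale, referencing the dimension by surrogate key.
cur.execute("""
CREATE TABLE fact_sales (
    sale_id  INTEGER PRIMARY KEY,
    date_key INTEGER REFERENCES dim_date(date_key),
    amount   REAL
)""")

cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(20240101, "2024-01-01", "2024-01"),
                 (20240102, "2024-01-02", "2024-01")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 20240101, 100.0),
                 (2, 20240101, 50.0),
                 (3, 20240102, 25.0)])

# Typical BI query: aggregate the facts, grouped by a dimension attribute.
cur.execute("""
SELECT d.month, SUM(f.amount)
FROM fact_sales f
JOIN dim_date d ON f.date_key = d.date_key
GROUP BY d.month
""")
monthly_totals = cur.fetchall()
print(monthly_totals)  # [('2024-01', 175.0)]
```

The same shape scales up directly: in Hive or BigQuery the fact table would be partitioned (e.g. by `date_key`) and the dimensions maintained by separate load jobs.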
Hiring through ALP Consulting for a reputed organization (product- and service-based company).
Employment Type: Permanent
Experience:
Skills Required: Excellent communication skills
Strong Experience in:
Job Location: Pan India
Note: Candidates with a notice period of 60 days or less will be prioritized.