Join DataEconomy and be part of a dynamic team driving data-driven solutions. We're seeking highly skilled PySpark developers with 4-6 years of experience to join our team in Hyderabad or Pune.
Responsibilities:
- Design and implement robust metadata-driven data ingestion pipelines using PySpark (see the sketch after this list for an illustration of the pattern).
- Collaborate with technical teams to develop innovative data solutions.
- Work closely with business stakeholders to understand and translate requirements into technical specifications.
- Conduct unit testing and system testing, and provide support during UAT.
- Demonstrate strong analytical and problem-solving skills, as well as a commitment to excellence in software development.
- Experience in the financial or banking domain is a plus.
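For context, the following is a minimal sketch of the kind of metadata-driven ingestion loop described above. The source paths, file formats, and target table names are illustrative assumptions only, not DataEconomy's actual framework or configuration format.

```python
# Minimal sketch of a metadata-driven ingestion loop in PySpark.
# All paths, formats, and table names below are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata_driven_ingest").getOrCreate()

# In practice this metadata would typically come from a config table or JSON file.
ingestion_metadata = [
    {"source_path": "s3://bucket/customers/", "format": "csv",
     "options": {"header": "true"}, "target_table": "staging.customers"},
    {"source_path": "s3://bucket/orders/", "format": "parquet",
     "options": {}, "target_table": "staging.orders"},
]

for entry in ingestion_metadata:
    # Each dataset is read and landed using only its metadata entry,
    # so new feeds are onboarded by adding configuration rather than code.
    df = (spark.read.format(entry["format"])
          .options(**entry["options"])
          .load(entry["source_path"]))
    df.write.mode("overwrite").saveAsTable(entry["target_table"])
```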
Requirements:
- 4-6 years of experience in IT, with a minimum of 3 years of hands-on experience in Python and PySpark.
- Solid understanding of data warehousing concepts and ETL processes.
- Proficiency in Linux and Java is a plus.
- Experience with code versioning tools such as Git and AWS CodeCommit, and with CI/CD pipelines (e.g., AWS CodePipeline).
- Proven ability to build metadata-driven frameworks for data ingestion.
- Familiarity with various design and architectural patterns.
Benefits:
- Opportunities for professional growth and development.
- Be part of a dynamic and collaborative team.