
PySpark Developer

Jobs by Experience

4-6 years

Job Location

Hyderabad - India

Salary

Not Disclosed


Vacancy

1 Vacancy

Job Description

Join DataEconomy and be part of a dynamic team driving data-driven solutions. We're seeking highly skilled PySpark developers with 4-6 years of experience to join our team in Hyderabad or Pune.

Responsibilities:

  • Design and implement robust metadata-driven data ingestion pipelines using PySpark.
  • Collaborate with technical teams to develop innovative data solutions.
  • Work closely with business stakeholders to understand and translate requirements into technical specifications.
  • Conduct unit testing, system testing, and support during UAT.
  • Demonstrate strong analytical and problem-solving skills, as well as a commitment to excellence in software development.
  • Experience in the financial or banking domain is a plus.
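To illustrate the first responsibility, here is a minimal sketch of what "metadata-driven ingestion" typically means in PySpark: each source is described by a metadata record, and a generic loader turns that record into a read. The `SourceMeta` fields and helper names below are illustrative assumptions, not part of the posting.

```python
# Minimal sketch of a metadata-driven ingestion step (illustrative only).
# Each data source is described by a metadata record; a generic loader
# translates the record into PySpark reader arguments.

from dataclasses import dataclass, field

@dataclass
class SourceMeta:
    name: str                      # logical source name
    path: str                      # input location (e.g. S3/HDFS path)
    fmt: str = "parquet"           # file format understood by spark.read
    options: dict = field(default_factory=dict)  # per-source overrides

def build_read_plan(meta: SourceMeta) -> dict:
    """Merge format-level defaults with per-source metadata options."""
    defaults = {"csv": {"header": "true", "inferSchema": "true"}}
    opts = {**defaults.get(meta.fmt, {}), **meta.options}
    return {"format": meta.fmt, "path": meta.path, "options": opts}

def ingest(spark, meta: SourceMeta):
    """Execute the plan with a live SparkSession (requires pyspark)."""
    plan = build_read_plan(meta)
    return (spark.read
                 .format(plan["format"])
                 .options(**plan["options"])
                 .load(plan["path"]))
```

New sources are then onboarded by adding metadata records (often stored in a table or config file) rather than writing new pipeline code.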


Requirements

  • 4-6 years of experience in IT, with a minimum of 3 years of hands-on experience in Python and PySpark.
  • Solid understanding of data warehousing concepts and ETL processes.
  • Proficiency in Linux and Java is a plus.
  • Experience with code versioning tools like Git, AWS CodeCommit, and CI/CD pipelines (e.g., AWS CodePipeline).
  • Proven ability to build metadata-driven frameworks for data ingestion.
  • Familiarity with various design and architectural patterns.


Benefits

  • Opportunities for professional growth and development.
  • Be part of a dynamic and collaborative team.



    Employment Type

    Full Time
