Do you love a career where you Experience, Grow & Contribute at the same time while earning at least 10% above the market? If so, we are excited to have bumped into you.
If you are a PySpark/SQL Developer looking for excitement, challenge, and stability in your work, then you will be glad to have come across this page.
We are an IT Solutions Integrator/Consulting Firm helping our clients hire the right professional for an exciting long-term project. Here are a few details.
Check if you are up for maximizing your earning/growth potential by leveraging our Disruptive Talent Solution.
Role: PySpark/SQL Developer
Location: Hyderabad/Bengaluru
Hybrid Mode Position
Exp: 3-8 Years
Requirements
We are seeking an experienced professional with a strong background in distributed systems, data engineering, and software development. The ideal candidate will possess hands-on expertise in working with Hadoop, Spark, SQL, and PySpark, along with experience in designing optimized data pipelines. This role involves working on large-scale data processing systems and leading teams in delivering robust technical solutions.
Mandatory Requirements:
- Strong SQL skills, particularly in writing complex queries.
- Hands-on experience in building and developing applications using PySpark.
- 5-8 years of relevant experience (3-5 years considered if the candidate demonstrates exceptional skill).
Qualifications:
- Bachelor's degree in Engineering, Computer Science, or equivalent. A Master's in Computer Applications or equivalent is also acceptable.
- 6 years of software development experience, including leading engineering and scrum teams.
- 3 years of hands-on experience with MapReduce, Hive, and Spark (Core, SQL, and PySpark).
- Solid understanding of data warehousing concepts.
- Familiarity with financial reporting ecosystems is a plus.
Key Skills:
- Expertise in distributed computing environments (e.g., MapReduce, Hive, Spark).
- Proficiency in programming with Core Java, Python, or Scala.
- In-depth knowledge of Hadoop and Spark architecture and operational principles.
- Experience in writing and optimizing complex SQL queries (Hive/PySpark DataFrames) for processing large volumes of data.
- Proficiency in UNIX shell scripting.
- Ability to design and develop optimized data pipelines for both batch and real-time data processing.
- Experience in system analysis, design, development, testing, and implementation.
- Demonstrated ability to develop and document both technical and functional specifications and analyze system processing flows.
Preferred Qualifications:
- Knowledge of cloud platforms such as GCP or AWS, and experience in building scalable solutions and microservices, is an advantage.
- 1+ years of experience designing and building solutions using Kafka streams or queues.
- Experience working with version control systems like GitHub/Bitbucket and leveraging CI/CD pipelines.
- Familiarity with NoSQL databases (HBase, Couchbase, MongoDB) is a plus.
- Excellent technical and analytical aptitude.
- Strong communication and project management skills.
- Results-driven mindset.
Additional Details:
- This position requires at least 3 years of hands-on experience with MapReduce, Hive, and Spark (Core, SQL, and PySpark).
- Strong verbal communication skills are essential.