**1. Role**
Developer
**2. Required Technical Skill Set**
Hadoop, Python, PySpark, Hive
**Desired Competencies (Technical/Behavioral Competency)**
**Must-Have**
Hands-on experience with Hadoop, Python, PySpark, Hive, and Big Data ecosystem tools.
Should be able to develop and tune queries and work on performance enhancement.
Solid understanding of object-oriented programming and HDFS concepts.
The candidate will be responsible for delivering code, setting up environment connectivity, and deploying the code to production after testing.
**Good-to-Have**
Good DWH/Data Lake knowledge is preferable.
Conceptual and creative problem-solving skills; ability to work with considerable ambiguity; ability to learn new and complex concepts quickly.
Experience working with teams in a complex organization involving multiple reporting lines.
Good knowledge of DevOps and Agile development frameworks.
**Responsibility of / Expectations from the Role**
Work as a developer on the Cloudera Hadoop platform.
Work on Hadoop, Python, PySpark, Hive, SQL, and Big Data ecosystem tools.
The candidate should have strong functional and technical knowledge to deliver what is required, and should be well acquainted with banking terminology.
The candidate should have strong DevOps and Agile development framework knowledge.
Create Python/PySpark jobs for data transformation and aggregation.
Experience with stream-processing systems like Spark Streaming.
Keywords: Hadoop, Hive, Python, Spark, PySpark, Scala, Big Data ecosystem tools, DevOps, Agile development framework