Senior Hadoop Developer role based in Wilmington, Delaware, requiring in-office work.
1-year contract duration.
Open to U.S. citizens (USC) or Green Card (GC) holders.
Requires a minimum of 10 years of experience.
Must be familiar with integrating data streaming platforms such as Kafka for real-time data ingestion and processing.
Should have at least 8 years of hands-on experience in the Hadoop ecosystem, with expertise in PySpark.
Proficiency in writing and optimizing complex Hive and Spark queries is mandatory.
Experience in handling large volumes of data, especially in the banking domain, with a focus on data governance, security, and compliance is required.
Must have strong knowledge of Linux commands for file handling, process management, and system troubleshooting.
Experience with the Cloudera distribution, including cluster management, troubleshooting, and performance tuning, is necessary.
Must have hands-on experience with Oozie for scheduling and managing complex ETL workflows and job orchestration.
Ability to manage real-time data processing pipelines and large-scale financial data for critical business applications is essential.
In-depth experience with Apache Spark for high-performance distributed data processing, tuning, and optimization is required.
A basic understanding of Hadoop administration tasks, including cluster monitoring, tuning, and troubleshooting, along with familiarity with Ambari for cluster management, is mandatory.