Salary: Not Disclosed
Title: Hadoop Engineer / Architect
Location: Jersey City, NJ
Relocation: Acceptable
Contract Type: C2C
Experience: 12 Years
Visa: Any Visa
Job Summary:
We are seeking a talented Hadoop Engineer / Architect to join our team. The ideal candidate will have strong experience designing, building, and maintaining large-scale data solutions using the Hadoop ecosystem. This role involves working closely with cross-functional teams to architect, implement, and optimize data processing systems for big data analytics and storage.
Key Responsibilities:
Architect and design scalable, reliable, and high-performance Hadoop-based big data solutions.
Manage and maintain Hadoop clusters, ensuring optimal performance, scalability, and security.
Collaborate with data engineers and data scientists to design efficient data pipelines and ETL processes.
Design and develop solutions for data ingestion, processing, and storage using tools within the Hadoop ecosystem, such as HDFS, Hive, HBase, MapReduce, Pig, Spark, Flume, and Kafka.
Implement monitoring, tuning, and troubleshooting strategies for performance optimization.
Ensure data integrity and implement security protocols for sensitive data.
Provide thought leadership and recommend enhancements to the existing architecture based on the latest Hadoop technologies and best practices.
Assist with the migration of legacy systems and ensure seamless data integration with the Hadoop ecosystem.
Guide the bank in meeting its product goals, with a deep focus on big data architecture modernization, data monetization, data availability, and data management.
Collaborate with DevOps teams to ensure efficient deployment and automation of Hadoop solutions.
Qualifications:
Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
5 years of experience working with Hadoop ecosystem components (HDFS, Hive, HBase, etc.).
Proven expertise in data architecture and Hadoop cluster management.
Hands-on experience with Spark, MapReduce, and NoSQL databases.
Proficient in Java, Python, or Scala for data processing and scripting.
Strong understanding of distributed computing and parallel processing.
Experience with cloud platforms (AWS, Azure, GCP) and their big data solutions (e.g., Amazon EMR, Azure HDInsight).
Knowledge of data governance, security protocols, and compliance.
Familiarity with DevOps practices, including automation of deployments and scaling solutions.
Excellent problem-solving skills and the ability to work in a fast-paced environment.
Preferred Skills:
Experience with containerization technologies (Docker, Kubernetes).
Knowledge of machine learning tools and integration with Hadoop.
Experience in migrating on-prem Hadoop clusters to cloud platforms.
Familiarity with CI/CD pipelines for big data solutions.
A product mindset is a must. Should have played a key role in formulating a product-centric data strategy for a financial services client.
Exposure to treasury areas such as liquidity management, payments, or capital management will be a huge plus.
Technical skills: big data, cloud data management, data lineage, data quality platforms, data distribution architectures, and enterprise data patterns across multiple data layers.
Employment Type: Full Time