We are looking for a skilled and experienced Big Data Engineer proficient in Hadoop MapReduce, Java, and Apache Spark. The ideal candidate has a strong background in big data technologies and can optimize cluster configurations and fine-tune data processing workflows. As a member of our team, you will play a key role in configuring clusters, improving performance, analyzing data patterns, and recommending the best tools for our data needs.
Key Responsibilities:
Design, develop, and maintain Hadoop MapReduce jobs for large-scale data processing.
Fine-tune and optimize existing MapReduce processes for better performance.
Write efficient Java and Spark code to manage and process complex datasets.
Debug performance issues to identify bottlenecks in data processing workflows and suggest improvements.
Analyze data patterns and trends to ensure data integrity and accuracy.
Evaluate and recommend suitable open-source tools and frameworks based on project requirements.
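For candidates less familiar with the MapReduce model named in the responsibilities above, the core pattern is a map phase that emits (key, value) pairs followed by a reduce phase that aggregates values per key. A minimal, purely illustrative sketch in plain Java (no Hadoop dependency) of the classic word-count job:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountSketch {
    // "Map" phase: split each line into words (conceptually, (word, 1) pairs);
    // "Reduce" phase: sum the counts per word.
    // Hadoop distributes these two phases across a cluster; here they run
    // in-process purely to illustrate the pattern.
    public static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(word -> !word.isEmpty())
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("big data big jobs")));
    }
}
```

In a real Hadoop job the same logic would live in `Mapper` and `Reducer` classes, and in Spark it would be a `flatMap` followed by `reduceByKey`.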
Requirements:
10 years of overall industry experience.
5 years of experience working with Hadoop MapReduce, Java, and Apache Spark.
Strong understanding of distributed computing principles and big data processing.
Proven experience in performance optimization and troubleshooting in a big data environment.
Solid knowledge of data modeling, data architecture, and data processing frameworks.
Experience with open-source big data tools and a keen understanding of their best use cases.
Desired Skills and Qualities:
Excellent communication skills to effectively interact with team members and stakeholders.
Ability to understand complex data structures and derive meaningful insights.
Proactive in suggesting and implementing best practices for data management.
Adaptable and innovative mindset toward emerging big data technologies.
Strong problem-solving skills and attention to detail.
Full Time