Role Responsibilities:
Design and optimize large-scale data pipelines for financial data processing
Build ETL workflows and implement Big Data solutions (Hadoop, Spark, Hive)
Develop scalable code in Java, Python, or Scala
Collaborate with data scientists to drive insights
Ensure data quality, security, and performance
Mentor and lead a team of data professionals
Qualifications:
What We're Looking For:
5 years of experience in Big Data development
Expertise in Hadoop, Spark, and Hive
Proficiency in Java, Python, or Scala
Experience with cloud platforms (AWS, Azure, GCP)
High level of English (mandatory)
Strong problem-solving and teamwork skills
Nice to Have:
Knowledge of data governance, CI/CD, or machine learning frameworks
Others:
100% remote work (only from Spain)
Additional Information:
What We Offer:
Competitive salary based on experience
Career development opportunities
Ongoing training in tech competencies
Flexible working hours
Flexible benefits (health insurance, meal vouchers, transport cards, etc.)
Are you ready to take your career to the next level? Join a company that values your talent and growth! Send us your CV today and let's connect!
Remote Work:
Yes
Employment Type:
Full-time