Strong background in data processing and software engineering, with the ability to build high-quality, scalable, data-oriented products.
Industry experience working with distributed data technologies (e.g. Hadoop, MapReduce, Spark, EMR, etc.) to build efficient, large-scale data pipelines.
Strong software engineering experience, with an in-depth understanding of Python, Scala, Java, or equivalent.
Strong understanding of data architecture, modelling, and infrastructure.
Experience building workflows (ETL pipelines).
Experience with SQL and optimizing queries.
A problem solver with attention to detail who can see complex problems in the data space through end to end.
Willingness to work in a fast-paced environment.
You'll have (Qualifications & Experience):
MS/BS in Computer Science or relevant industry experience.
It would be great if you have (Additional Skills):
Experience building scalable applications in the cloud (Amazon AWS, Google Cloud, etc.).
Experience building stream-processing applications (Spark Streaming, Apache Flink, Kafka, etc.).
Experience with Databricks and Snowflake.