Responsibilities:
Work closely with business owners, product owners, and cross-functional teams to gather technical requirements
Build and maintain reliable, scalable ETL pipelines on big data and/or cloud platforms through the collection, storage, processing, and transformation of large datasets
Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases
Work with the team to solve problems in big data technologies and prototype solutions that improve our data processing architecture
Must Have:
Ensure proper execution of tasks and alignment with business vision and objectives
Oversee the activities of junior data engineering teams
Ability to work closely with business owners, product owners, and cross-functional teams to gather technical requirements
Experience building and maintaining reliable, scalable ETL pipelines on big data/cloud platforms through the collection, storage, processing, and transformation of large datasets
Experience working with varied forms of data infrastructure, including relational databases (SQL), Hadoop, and Spark
Proficiency in scripting languages such as Python and PySpark
Experience with AWS cloud and DevOps practices
Experience in Databricks
Experience in database design and data modelling
Strong experience with data warehouse concepts and a solid understanding of data lake, lakehouse, and emerging big data ecosystem concepts
Experience in testing and validation to ensure the accuracy of data transformations and support data verification
Should be able to independently drive requirements and bring a solution-oriented mindset when working with business stakeholders
Soft Skills:
Excellent verbal and written communication skills needed.
Should be an excellent team player.
Good knowledge of Agile principles and experience working in Scrum teams using Jira
Should be comfortable mentoring junior engineers on the team (for Senior Data Engineers)
Should be able to operate in an ambiguous environment with minimal guidance.