Role: Big Data Engineer
Minimum 5 to 10 years of experience in Big Data Engineering and related technologies.
Mandatory
- Expert-level understanding of distributed computing principles (big data processing).
- Expert-level understanding of and hands-on experience in designing data models.
- Hands-on experience designing data pipelines for ETL processes (Databricks/Delta Live Tables/Synapse/Data Factory).
- Experience designing and setting up Delta Lakehouse architecture (Data Vault 2.0, data mart/star schema, snowflake schema).
- Experience applying best practices to big data sets (query optimization, data partitioning, relative filters).
- Good knowledge of and hands-on experience with Apache Spark (batch and streaming data).
- Hands-on experience programming in Python and SQL, and maintaining code quality/test coverage.
- Knowledge of data engineering methodologies (e.g., SCD, complex analytical queries over huge amounts of data).
- Practitioner of Agile methodology (Scrum/Kanban).
- Knowledge of and experience with code management, code versioning, Git flow, and release planning.
- Excellent communication, presentation, and documentation skills.
- Right mindset: takes initiative, team player, keen to learn and adapt to change.
Good to have
- Experience setting up DevOps pipelines on Kubernetes.
- Working experience as a Data Engineer in a Cloud environment (Microsoft Azure).
- Experience integrating data from multiple data sources (Telnet/MQTT/HTTP/PubSub).
- Work experience with streaming technologies like Apache Flink and Apache Storm.
- Experience with query engines like Trino and Dremio.
- Hands-on experience with BI tools (Power BI, Tableau).
Remote Work:
No
Employment Type:
Full-time