Senior Databricks Engineer
Experience: 8 years
Contract Period: 3 to 6 months
Location: Remote
Mandatory Skills: Python, SQL, Spark, Azure Databricks, Azure Data Factory, NoSQL
Job Description
Candidate Profile: Senior Databricks Engineer; strong experience in Databricks is required.
This candidate brings a unique blend of technical expertise in SQL and Python, coupled with industry-specific experience in financial services, making them well-suited to tackle complex data challenges within a life insurance setting.
Experience:
o 6 to 10 years of hands-on experience in SQL and Python development.
o Proven track record of building complex data validation and transformation projects using both SQL and Python.
Industry Experience:
o Demonstrated experience working in a financial services company, with a focus on the life insurance sector.
Technical Skills:
o Strong practical knowledge of SQL and Python, with an emphasis on creating reusable code.
o Strong skills in data manipulation tools and libraries (e.g., Pandas, NumPy) for cleaning, transforming, and analyzing data.
o Experience with both relational databases (e.g., SQL Server) and NoSQL databases (e.g., Cosmos DB, MongoDB).
o Experience with data integration tools and techniques, including ETL (Extract, Transform, Load) processes, data pipelines, and orchestration (e.g., Azure Data Factory).
o Proficient in writing base code that dynamically generates multiple code instances from a configuration file (JSON or SQL database format).
o Experience working in an Azure cloud environment, particularly with data lakes.
o Solid knowledge of, and experience with, well-architected frameworks for data platforms in the cloud.
o Familiarity with writing SQL or Python code for accessing and processing data in a data lake, including knowledge of Delta and Parquet formats.
o Demonstrated experience and skills in data modelling.
o Knowledge of Databricks, Spark, or similar frameworks for processing large datasets is considered an added advantage.
o Knowledge of Apache Spark, including Spark SQL and the DataFrame/Dataset API.
o Proficiency in setting up and managing Databricks clusters, notebooks, and jobs within the Azure ecosystem.
o Understanding of Azure security best practices and compliance standards relevant to data engineering workflows in Azure Databricks.
o Designing, implementing, and optimizing end-to-end data pipelines using Azure Databricks, ensuring reliability, scalability, and performance.
o Utilizing Azure Databricks for data transformation tasks, including cleansing, aggregation, and enrichment of datasets.
o Implementing automation for data workflows and monitoring pipelines to ensure data integrity and availability.
o Familiarity with version control systems like Git for managing code and notebooks.
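As an illustration of the config-driven code generation skill above, here is a minimal Python sketch: a JSON configuration describes several tables, and one validation SQL statement is generated per entry. The table and column names are purely hypothetical, not taken from any actual codebase for this role.

```python
import json

# Hypothetical configuration: each entry names a source table and its
# key column. In practice this could also come from a SQL database.
config_json = """
[
  {"table": "policies", "key_column": "policy_id"},
  {"table": "claims",   "key_column": "claim_id"}
]
"""

def generate_null_checks(config_text):
    """Generate one data-validation SQL statement per configured table."""
    entries = json.loads(config_text)
    statements = []
    for entry in entries:
        # Each generated statement counts rows whose key column is NULL.
        sql = (
            f"SELECT COUNT(*) AS null_keys FROM {entry['table']} "
            f"WHERE {entry['key_column']} IS NULL"
        )
        statements.append(sql)
    return statements

for stmt in generate_null_checks(config_json):
    print(stmt)
```

Adding a new table to the validation suite then only requires a new JSON entry, not new code, which is the point of the pattern.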
Key Attributes:
o Detail-oriented, with a focus on data quality and integrity.
o Strong problem-solving skills and the ability to optimize data workflows.
o Effective communicator and team player, with the ability to collaborate with actuarial scientists and analysts.