Guidewire CDA Integration: Utilize Guidewire Cloud Data Access (CDA) to extract, transform, and load (ETL) Guidewire data into downstream systems, ensuring data availability for reporting and analytics.
Data Pipelines & ETL Development: Design, build, and maintain scalable, efficient ETL pipelines that integrate Guidewire data with data warehouses, data lakes, and other enterprise systems.
Data Modeling & Architecture: Work closely with Data Architects to develop, optimize, and manage Guidewire data models and schemas, ensuring high performance and scalability.
Cloud Integration: Implement cloud-based data engineering solutions on platforms such as Azure, ensuring smooth integration of Guidewire data with cloud services.
Data Quality & Governance: Ensure data integrity, accuracy, and compliance with data governance standards across all Guidewire-related data pipelines and integrations.
Performance Tuning & Optimization: Optimize data processing workflows and queries to ensure high performance, minimizing delays in data availability.
Collaboration: Collaborate with business analysts, data architects, and other IT teams to translate business requirements into effective data engineering solutions.
Automation: Build and maintain automated processes for regular data loads, ensuring reliable data ingestion and processing with minimal manual intervention.
Documentation & Best Practices: Maintain clear documentation of data engineering processes, data flows, and pipeline architecture while adhering to industry best practices.
Technical Skills:
10 years of experience in data engineering or a similar role.
3 years of experience with Guidewire Insurance Suite (PolicyCenter, BillingCenter, ClaimCenter).
2 years of hands-on experience with Guidewire Cloud Data Access (CDA) for data extraction and integration.
Proven experience in building ETL pipelines and integrating Guidewire data with cloud-based and on-premises systems.
Strong SQL and PL/SQL skills for querying and transforming Guidewire data.
Proficiency with data integration and ETL tools such as Informatica and PySpark.
Experience with the Azure cloud platform for data storage, processing, and integration.
Familiarity with big data technologies and modern data architecture.
Hands-on experience with APIs and microservices for data integration.
Knowledge of version control systems (e.g. Git) and CI/CD practices for automating data workflows.