Job Title: Lead Data Engineer
Location: Dallas, Texas (Onsite)
Job Type: Full Time
EDH is seeking a Lead Data Engineer to join our Data Engineering team. In this role, you will lead a team of Data Engineers in designing and building scalable data pipelines and systems. The ideal candidate will have advanced knowledge of Data Architecture, batch processing frameworks, and Data Modeling techniques to facilitate seamless Data Ingestion and Exports. Success in this role will require strong leadership skills, technical expertise, and the ability to drive impactful outcomes.
Responsibilities:
- Lead and mentor a team of data engineers to design, develop, and deploy scalable solutions.
- Collaborate with product and business stakeholders to deliver data solutions that meet user needs.
- Architect, build, and maintain data infrastructure, ensuring reliability, scalability, and performance across various data sources and platforms.
- Implement Data Ingestion, transformation, and Data Quality features to support Application Engineering, Analytics, and various Business Verticals across the organization.
- Collaborate with the Data Architect and Business Analysts to build data models that improve the reliability and interpretability of data for analytical and business needs.
- Define and enforce coding standards and document best practices within the team.
- Actively contribute to the creation of design documents assess technologies and conduct Proof of Concepts (PoCs).
- Support production jobs and mitigate issues in a timely manner.
- Partner with senior leadership to define the Data Strategy Roadmap and plan hiring for future needs.
- Foster a culture that embraces continuous learning and innovation.
Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7 years of experience as a Lead Engineer with a focus on building enterprise data solutions.
- 4 years of cloud experience with Azure (preferred), AWS, or GCP.
- Previous experience leading a team and delivering projects; be prepared to share examples of successful project deliveries and development methodologies in the interview.
- Proficiency in programming languages such as Python, with experience in modern data technologies like Spark, Databricks, and Kafka.
- Expertise with Microsoft SQL Server (preferred) or other relational databases.
- Strong experience with the Microsoft ETL stack: SSIS and Azure Data Factory.
- Expertise with SQL stored procedures, triggers, and performance tuning.
- Experience in building data lakes to support high-speed querying by end users.
- Strong communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
- Prior experience working with healthcare information exchange standards such as HL7, X12 EDI, and FHIR will be helpful.
- Familiarity with CI/CD (e.g., Azure Pipelines) and IaC (e.g., Terraform) is a plus.
- Experience with or knowledge of creating RESTful APIs and data visualizations is a plus.
If you meet the qualifications and are interested in this role, please submit your resume. We look forward to reviewing your application.