Responsibilities:
- Design, develop, and implement end-to-end data pipelines using ETL processes and technologies such as Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks (a brief illustrative sketch follows this list).
- Act as a technology liaison to leadership, facilitating collaboration on and integration of impactful, advanced technical solutions for large-scale mission impact.
- Create and optimize data pipelines from scratch, ensuring scalability, reliability, and high-performance processing.
- Perform data cleansing, data integration, and data quality assurance activities to maintain the accuracy and integrity of large datasets.
- Leverage big data technologies to efficiently process and analyze large datasets, particularly those encountered in a federal agency.
- Troubleshoot data-related problems and provide innovative solutions to address complex data challenges.
- Implement and enforce data governance policies and procedures, ensuring compliance with regulatory requirements and industry best practices.
- Work closely with cross-functional teams to understand data requirements and design optimal data models and architectures.
- Collaborate with data scientists, analysts, and stakeholders to provide timely and accurate data insights and support decision-making processes.
- Maintain documentation for software applications, workflows, and processes.
- Stay up to date on emerging trends and advancements in data engineering, and recommend suitable tools and technologies for continuous improvement.
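
For context, a minimal PySpark sketch of the kind of ETL pipeline work described above. This example is not from the posting; the file paths, the "id" column, and the app name are hypothetical placeholders.

# Minimal ETL sketch: extract raw CSV, cleanse, load as Parquet.
# Illustrative only; all paths and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw records (hypothetical input path).
raw = spark.read.csv("/data/raw/records.csv", header=True, inferSchema=True)

# Transform: basic cleansing and quality assurance steps.
clean = (
    raw.dropDuplicates()
       .na.drop(subset=["id"])  # drop rows missing the (assumed) key column
       .withColumn("ingested_at", F.current_timestamp())
)

# Load: write a columnar copy for downstream analytics (hypothetical output path).
clean.write.mode("overwrite").parquet("/data/curated/records")

spark.stop()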
Requirements:
- US Citizenship Required: Only US Citizens are eligible for this position.
- Clearance: Ivertix will sponsor a security clearance for US citizens.
- Experience: Minimum of 6 years of experience as a Data Engineer or in a similar role, with a strong background in building and managing data pipelines.
- Big Data Technologies: Proficiency with big data technologies such as Databricks, Hadoop, Spark, and Kafka.
- Database Management: Experience with relational and NoSQL databases such as MySQL, PostgreSQL, MongoDB, or Cassandra.
- Programming Languages: Strong programming skills in Python and Scala.
- Cloud Services: Experience with cloud platforms such as AWS, Azure, or Google Cloud, and their data services.
- Data Modeling: Expertise in data modeling, schema design, and data warehousing.
- Data Security: Knowledge of data security best practices and regulatory compliance.
- Databricks: Experience with Databricks for data processing and analytics, including building and managing data pipelines in Databricks environments.
- Problem-solving: Strong problem-solving skills and the ability to work in a fast-paced, dynamic environment.
- Communication: Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- Detail-oriented: A detail-oriented mindset with a commitment to delivering high-quality results.
- Location: Remote, or 2-3 days/week onsite in the DC Metro area.
Nice to Have:
- Recent DoD or IC-related experience.
- Certification in big data technologies or cloud data services.
- Knowledge of Qlik/Qlik Sense, QVD/QlikView, and Qlik Production Application.
Benefits:
Our client provides a comprehensive benefits package designed to support your health, financial well-being, and professional development. This includes competitive healthcare coverage for you and your family, a retirement savings plan with employer contributions, ample paid time off including holidays, opportunities for continuous learning and career advancement, and more! Our client prioritizes a supportive work environment with policies that promote work-life balance and recognize employee contributions, ensuring a rewarding experience for all team members.