We are looking for a talented Snowflake Engineer with experience in Apache Kafka to design, develop, and optimize our data pipelines. In this role, you will be responsible for building scalable, high-performance data systems that integrate with various data sources and contribute to the overall data architecture. You will collaborate with cross-functional teams to ensure seamless data flow and support advanced analytics initiatives.
Key Responsibilities:
- Design, implement, and maintain Snowflake data warehouse solutions, ensuring high performance, scalability, and security.
- Develop and optimize ETL/ELT pipelines to ingest, transform, and load data into Snowflake from Kafka and other data sources.
- Work closely with data architects, data analysts, and data scientists to understand business requirements and translate them into technical solutions.
- Design and implement Kafka-based messaging systems to stream data in real time to Snowflake.
- Troubleshoot and resolve data-related issues, including performance bottlenecks and data quality problems.
- Monitor and optimize data pipelines for efficiency, scalability, and cost-effectiveness.
- Implement data governance and security practices to ensure compliance with organizational standards.
- Provide technical guidance and mentorship to junior engineers on Snowflake and Kafka-related technologies.
- Stay updated on emerging technologies and best practices in data engineering and cloud services.
Required Skills and Experience:
- 4 years of hands-on experience with the Snowflake data platform, including data modeling, performance tuning, and optimization.
- Strong experience with Apache Kafka for stream processing and real-time data integration.
- Proficiency in SQL and ETL/ELT processes.
- Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with scripting languages such as Python, Shell, or similar for automation and data integration tasks.
- Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
- Knowledge of data governance security and compliance best practices.
- Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues.
- Ability to work in a collaborative team environment and communicate effectively with cross-functional teams.
Preferred Skills:
- Experience with other cloud data services such as Amazon Redshift, Google BigQuery, or Azure Synapse.
- Familiarity with containerization (Docker, Kubernetes) and orchestration tools.
- Experience with machine learning models and integration with data pipelines.
Educational Qualification:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
Why Join Us:
- Work with innovative technologies in a fast-paced, collaborative environment.
- Opportunity to contribute to cutting-edge data engineering solutions.
- Competitive salary and benefits.
- Continuous learning and career development opportunities.