Responsibilities:
As a Senior Data Engineer, you will:
Design and develop big data applications using the latest open source technologies.
Work within an offshore delivery model with managed outcomes.
Develop logical and physical data models for big data platforms.
Automate workflows using Apache Airflow.
Create data pipelines using Apache Hive, Apache Spark, and Apache Kafka.
Provide ongoing maintenance and enhancements to existing systems, and participate in rotational on-call support.
Learn our business domain and technology infrastructure quickly and share your knowledge freely and actively with others in the team.
Mentor junior engineers on the team.
Lead daily standups and design reviews.
Groom and prioritize the backlog using JIRA.
Act as the point of contact for your assigned business domain.
Requirements:
GCP experience:
2 years of recent GCP experience.
Experience building data pipelines in GCP.
Experience with GCP Dataproc, GCS, and BigQuery.
5 years of hands-on experience developing data warehouse solutions and data products.
5 years of hands-on experience developing a distributed data processing platform with Hadoop, Hive, or Spark; Airflow or another workflow orchestration solution is also required.
2 years of hands-on experience modeling and designing schemas for data lakes or RDBMS platforms.
Experience with programming languages: Python, Java, Scala, etc.
Experience with scripting languages: Perl, Shell, etc.
Experience processing and managing large data sets (multi-TB/PB scale).
Exposure to test-driven development and automated testing frameworks.
Background in Scrum/Agile development methodologies.
Capable of delivering on multiple competing priorities with little supervision.
Excellent verbal and written communication skills.
Bachelor's degree in Computer Science or equivalent experience.
The most successful candidates will also have experience in the following:
Gitflow
Atlassian products: Bitbucket, JIRA, Confluence, etc.
Continuous integration tools such as Bamboo, Jenkins, or TFS