Responsibilities:
- Build and maintain data pipelines that process and clean heterogeneous data sources (internal and external, incl. third-party APIs);
- Manage our Data Lake (AWS S3);
- Set up and manage our Data Warehouse (AWS Redshift);
- Support the Data team by ensuring top-notch data accessibility;
- Work hand-in-hand with our DevOps team to cost-optimize our data usage;
- Work closely with Product, Engineering, and Data leadership to make sound technical decisions across the entire data ecosystem;
- Build systems to track data quality and consistency, ensuring that our data is accurate and up to date;
- Implement monitoring tools to detect issues and measure the performance of data pipelines;
- Establish, maintain, and monitor information security controls over the data pipelines, Data Lake, and Data Warehouse;
- Become the subject matter expert for all data engineering-related topics;
- Keep track of industry trends, best practices, and technologies to continually improve our technology (and ourselves!);
- Contribute to the hiring, mentorship, and management of junior data engineers (as the Data team expands).
Requirements:
- 4-5 years of experience in a Data Engineer role;
- 2+ years of experience building ETL/ELT pipelines over large amounts of data (using Python & SQL);
- 2+ years of experience with OOP (Python);
- Expert knowledge of SQL and RDBMS concepts;
- Advanced working knowledge of cloud data architectures (preferably S3, Redshift, Glue, Kinesis, and RDS on AWS);
- Experience with job schedulers (e.g. Airflow);
- Knowledge of container technologies (Docker, Kubernetes) and CI pipelines (GitHub Actions, Jenkins);
- At least Upper-Intermediate English.
Freelance