Join a top Fortune 500 project in Canada as a Data Engineer. Contribute to innovative solutions and technology advancements. Apply now to make an impact with a dynamic team. This hybrid role is based in Toronto, Ontario, Canada.
Responsibilities
Product-Driven Development: Apply a product-focused mindset to understand business needs and design scalable, adaptable systems that evolve with changing requirements.
Problem Solving & Technical Design: Deconstruct complex challenges, document technical solutions, and plan iterative improvements for fast, impactful results.
Data Infrastructure & Processing: Build and scale robust data infrastructure to handle batch and real-time processing of billions of records efficiently (see the illustrative PySpark sketch after this list).
Automation & Cloud Infrastructure: Automate cloud infrastructure, services, and observability to enhance system efficiency and reliability.
CI/CD & Testing: Develop CI/CD pipelines and integrate automated testing to ensure smooth, reliable deployments.
Cross-Functional Collaboration: Work closely with data engineers, data scientists, product managers, and other stakeholders to understand requirements and promote best practices.
Growth Mindset & Insights: Identify business challenges and opportunities using data analysis and mining to provide strategic and tactical recommendations.
Analytics & Reporting: Support analytics initiatives by delivering insights into product usage, campaign performance, funnel metrics, segmentation, conversion, and revenue growth.
Ad-Hoc Analysis & Dashboarding: Conduct ad-hoc analyses, manage long-term projects, and create reports and dashboards to reveal new insights and track progress on key initiatives.
Stakeholder Engagement: Partner with business stakeholders to understand analytical needs, define key metrics, and maintain a data-driven approach to problem-solving.
Cross-Team Partnership: Collaborate with cross-functional teams to gather business requirements and deliver tailored data solutions.
Data Storytelling & Presentation: Deliver impactful presentations that translate complex data into clear, actionable insights for diverse audiences.
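For illustration only: a minimal PySpark sketch of the kind of batch aggregation described under Data Infrastructure & Processing. The bucket paths, column names, and schema below are hypothetical placeholders assuming a standard Spark 3.x environment, not details of the actual project.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a Spark session for a batch aggregation job.
spark = SparkSession.builder.appName("daily_event_counts").getOrCreate()

# Read a partitioned Parquet dataset of raw events (hypothetical path and schema).
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Roll raw event records up into daily counts per event type.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write the aggregate back out, partitioned by date, for downstream analytics.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)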
Minimum Qualifications
Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience.
Industry Experience: 5-7 years of industry experience in big data systems, data processing, and SQL databases.
Spark & PySpark Expertise: 3 years of experience with Spark data frames, Spark SQL, and PySpark for large-scale data processing.
Programming Skills: 3 years of hands-on experience writing modular, maintainable code, preferably in Python and SQL.
SQL & Data Modeling: Strong proficiency in SQL, dimensional modeling, and working with analytical big data warehouses such as Hive and Snowflake.
ETL Tools: Experience with ETL workflow management tools such as Airflow (a minimal DAG sketch follows this list).
Business Intelligence (BI) Tools: 2 years of experience in building reports and dashboards using BI tools like Looker.
Version Control & CI/CD: Proficiency with version control and CI/CD tools like Git and Jenkins CI.
Data Analysis Tools: Experience working with and analyzing data using notebook solutions such as Jupyter, EMR Notebooks, and Apache Zeppelin.
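For illustration only: a minimal Airflow DAG sketch of the kind of ETL workflow management mentioned above. The DAG id, schedule, and callable are hypothetical placeholders and assume Airflow 2.4 or later.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_daily_transform(**context):
    # Placeholder for the actual transformation step, e.g. submitting a Spark job.
    print(f"Running transform for {context['ds']}")


with DAG(
    dag_id="example_daily_transform",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # daily batch cadence (Airflow 2.4+ syntax)
    catchup=False,
) as dag:
    PythonOperator(
        task_id="daily_transform",
        python_callable=run_daily_transform,
    )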
APPLY NOW!
NearSource Technologies values diversity and is committed to equal opportunity. All qualified applicants will be considered regardless of their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as protected veterans.
Full Time