Job Title: Sr. Data Engineer
Location: Glendale, CA (3 days a week onsite)
Rate: $80-$90/hour
The Company
Headquartered in Los Angeles, this leader in the Entertainment & Media space is focused on delivering world-class stories and experiences to its global audience. To offer the best entertainment experiences, its technology teams focus on continued innovation and the use of cutting-edge technology.
Our client's Studio Technology group comprises intentional teams providing scalable, secure, and innovative technology solutions to enable the present and future of cinematic storytelling. As part of the Studio's Data Services team, you'd join passionate, focused technologists who solve crucial problems within the dynamic and expanding media industry. We store, analyze, report, and visualize data and insights for everything from the financial engines that keep the Studio running, to marketing and consumer applications that touch millions of people, to production data that supports the creation of the company's wide array of films and series. We're looking for a Sr. Data Engineer to help us grow the impact of our team.
You will:
- Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
- Build tools and services to support data discovery, lineage, governance, and privacy
- Collaborate with other software/data engineers and cross-functional teams
- Work with a tech stack that includes Airflow, Spark, Databricks, Delta Lake, and Snowflake
- Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
- Contribute to developing and documenting internal and external standards and best practices for pipeline configurations, naming conventions, and more
- Maintain high operational efficiency and quality of Core Data platform datasets so that our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
- Be an active participant in and advocate of agile/scrum ceremonies, collaborating to improve processes for our team
- Engage with and understand our customers, forming relationships that allow us to understand and prioritize both innovative new offerings and incremental platform improvements
- Maintain detailed documentation of your work and changes to support data quality and data governance requirements
You could be a great fit if you have:
- 5+ years of data engineering experience developing large data pipelines
- Proficiency in at least one major programming language (e.g., Python, Java, Scala)
- Strong SQL skills and the ability to create queries to analyze complex datasets
- Hands-on production environment experience with distributed processing systems such as Spark
- Hands-on production experience creating and maintaining data pipelines with orchestration systems such as Airflow
- Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
- Experience developing APIs with GraphQL
- Deep understanding of AWS or other cloud providers, as well as infrastructure as code
- Familiarity with data modeling techniques and standard data warehousing methodologies and practices
- Strong algorithmic problem-solving expertise