Description:
IMPORTANT: Suppliers should not submit workers whose physical residence is within the following states due to Intuitive tax and operating entity structure:
Alabama, Arkansas, Delaware, Florida, Indiana, Iowa, Louisiana, Maryland, Mississippi, Missouri, Oklahoma, Pennsylvania, South Carolina, and Tennessee.
Please interpret this as Intuitive policy with which all suppliers are required to comply.
**********************************************************************************
***Important Notes to supplier: *** (US)
Subvending is not allowed; workers must be on your direct W-2.
A proper technical pre-screening is required.
A recruiter screening summary must be included at the top of the resume.
Including a skill matrix and a write-up from the worker on these skills is a plus.
All past projects should include durations and locations.
The worker's current location must be mentioned on the resume.
**********************************************************************************
Actual Title of the role: Data Engineer
Duration: 6 Months
Contract/possibility for conversion: N/A
Max bill rate/hr.:
Onsite/Hybrid/Remote: Hybrid
Only Locals/Non-locals can be submitted: Locals
Mode of interview: Zoom
No. of rounds of interview: 2
New JP or Backfill position: New position
Job Description: Data Engineer
We are seeking a skilled Data Application Engineer to design, build, and maintain data-driven applications and pipelines that enable seamless data integration, transformation, and delivery across systems. The ideal candidate will have a strong foundation in software engineering, database technologies, and cloud data platforms, with a focus on building scalable, robust, and efficient data applications.
Key Responsibilities:
Develop Data Applications: Build and maintain data-centric applications, tools, and APIs to enable real-time and batch data processing.
Data Integration: Design and implement data ingestion pipelines, integrating data from various sources such as databases, APIs, and file systems.
Data Transformation: Create reusable ETL/ELT pipelines to process and transform raw data into consumable formats using tools like Snowflake, DBT, or Python.
Collaboration: Work closely with analysts and stakeholders to understand requirements and translate them into scalable solutions.
Documentation: Maintain comprehensive documentation for data applications, workflows, and processes.
Required Skills and Qualifications:
Education: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Programming: Proficiency in programming languages such as Python, C#, and ASP.NET (Core).
Databases: Strong understanding of SQL and database design, and experience with relational databases (e.g., Snowflake, SQL Server).
Data Tools: Hands-on experience with ETL/ELT tools and frameworks such as Apache Airflow (DBT nice to have).
Cloud Platforms: Familiarity with cloud platforms such as AWS, Azure, or Google Cloud and their data services (e.g., S3, AWS Lambda).
Data Pipelines: Experience with real-time data processing tools (e.g., Kafka, Spark) and batch data processing.
APIs: Experience designing and integrating RESTful APIs for data access and application communication.
Version Control: Knowledge of version control systems like Git for code management.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues.
Preferred Skills:
Knowledge of containerization tools like Docker and orchestration platforms like Kubernetes.
Experience with BI tools like Tableau, Power BI, or Looker.
Soft Skills:
Excellent communication and collaboration skills to work effectively in cross-functional teams.
Ability to prioritize tasks and manage projects in a fast-paced environment.
Strong attention to detail and commitment to delivering high-quality results.
Additional Details
- Preidentified worker (First Name Last Name) & Supplier Name : (No Value)
- Job Posting Type : Agency Recruited Worker Required
- Worker Legal Name (For Manager Sourced Only) : (No Value)