Salary Not Disclosed
1 Vacancy
********* W2 ONLY ROLE, NO C2C. Only qualified candidates located near the O'Fallon, MO area will be considered, as the position requires an onsite presence. *********
Location: O'Fallon, MO
Type: 9-month contract on W2
Role Overview:
Seeking an enthusiastic and adaptable Senior Data Engineer responsible for developing and managing data pipelines and supporting a wide variety of data-driven requests across the company. The position calls for a broad range of technical skills, from developing high-throughput Spark jobs to tuning the performance of Big Data solutions. The ideal candidate will have experience working with large-scale data and building automated ETL processes. Data warehousing experience is a plus.
Qualifications:
Excellent problem-solving skills with a solid understanding of data engineering concepts.
Proficient in Apache Spark with Python and related technologies.
Strong knowledge of SQL and performance tuning.
Experience in Big Data technologies like Hadoop and Oracle Exadata.
Solid knowledge of Linux environments and proficiency with bash scripting.
Effective verbal and written communication skills.
Nice to Have:
Knowledge of or prior experience with Apache Kafka and Apache Spark with Scala.
Orchestration with Apache NiFi and Apache Airflow.
Java development and microservices architecture.
Build tools like Jenkins.
Log analysis and monitoring using Splunk.
Experience with Databricks and AWS.
Working with large datasets containing terabytes of data.
Key Responsibilities:
Build and maintain big data environments and applications, seeking opportunities for improvements and efficiencies.
Perform ETL (Extract, Transform, Load) processes based on business requirements using Apache Spark, with data ingestion from Apache Kafka (a minimal sketch follows this list).
Work with various data platforms, including Apache Hadoop, Apache Ozone, AWS S3, Delta Lake, and Apache Iceberg.
Utilize orchestration tools like Apache NiFi for managing and scheduling data flows efficiently.
Write performant SQL statements to analyze data with Hive/Impala/Oracle.
Participate in the full application development lifecycle (SDLC), from design to deployment.
Work with multiple stakeholders across teams to fulfill ad hoc investigations, including large-scale data extraction, transformation, and analysis.
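For context on the day-to-day work described above, here is a minimal sketch of the kind of Spark ETL this role involves: ingesting events from Apache Kafka, applying a transformation, and landing the result in a Delta Lake table. The broker address, topic name, schema, and paths are hypothetical illustrations, not details from the posting.

```python
# Minimal PySpark ETL sketch: Kafka -> transform -> Delta Lake.
# Assumes a Spark environment with the Kafka and delta-spark packages configured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-etl-sketch").getOrCreate()

# Hypothetical schema for the JSON payloads on the Kafka topic.
event_schema = StructType([
    StructField("txn_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Extract: read the raw stream from Kafka (broker and topic are placeholders).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "transactions")
       .load())

# Transform: parse the JSON value column and keep only well-formed records.
parsed = (raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*")
          .where(F.col("txn_id").isNotNull()))

# Load: append to a Delta Lake table, checkpointing the stream for recovery.
query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/transactions")
         .outputMode("append")
         .start("/data/delta/transactions"))

query.awaitTermination()
```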
Notes from Hiring Manager:
ETL, Spark, Python, SQL, Data Engineering, Hadoop
Full Time