Salary Not Disclosed
1 Vacancy
Designing and building ETL pipelines using Sqoop, Hive, MapReduce, and Spark in on-premises and cloud environments (a minimal Spark transform is sketched after this list).
Functional programming using Python and Scala for complex data transformations and in-memory computations.
Using Erwin for logical/physical data modeling and dimensional data modeling.
Designing and developing UNIX/Linux scripts for handling complex file formats and structures.
Orchestrating workflows and jobs using Airflow and Automic (an illustrative Airflow DAG is sketched below).
Creating multiple Kafka producers and consumers for data transfer (see the Kafka sketch below).
Performing continuous integration and deployment (CI/CD) using tools such as Git and Jenkins to run test cases and build applications, with code coverage measured through ScalaTest.
Analyzing data using SQL and BigQuery, monitoring cluster performance, setting up alerts, and documenting designs and workflows.
Providing production support, troubleshooting and fixing issues by tracking the status of running applications, and performing system administration tasks.
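
The Spark-based transformations above might look, in outline, like the following minimal PySpark sketch. The table names, columns, and aggregation are hypothetical placeholders, not the actual production pipeline.

# Minimal PySpark ETL sketch: read raw data, apply a transformation, write out.
# Table names, paths, and columns here are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read a hypothetical Hive table of raw orders.
raw = spark.table("staging.raw_orders")

# Transform: keep completed orders, derive a revenue column, aggregate by day.
daily_revenue = (
    raw.filter(F.col("order_status") == "COMPLETED")
       .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
       .groupBy(F.to_date("order_ts").alias("order_date"))
       .agg(F.sum("revenue").alias("daily_revenue"))
)

# Load: write the result back as a partitioned table.
daily_revenue.write.mode("overwrite").partitionBy("order_date").saveAsTable("analytics.daily_revenue")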
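
Orchestration with Airflow typically chains such steps into a DAG. The sketch below is a hypothetical daily pipeline; the DAG id, task commands, and connection string are illustrative assumptions, not the actual jobs.

# Minimal Airflow orchestration sketch: a daily DAG chaining a Sqoop-style
# ingest step and a Spark transform step. Names and commands are illustrative.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    ingest = BashOperator(
        task_id="sqoop_ingest",
        bash_command="sqoop import --connect jdbc:mysql://db/orders --table orders --hive-import",
    )
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/daily_revenue.py",
    )

    # Run the Spark transform only after ingestion succeeds.
    ingest >> transform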
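
Kafka producers and consumers for data transfer could follow a pattern like this kafka-python sketch; the broker address, topic, and record fields are assumed for illustration only.

# Minimal Kafka producer/consumer sketch using the kafka-python client.
# Broker address, topic name, and record fields are hypothetical.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = ["localhost:9092"]
TOPIC = "orders-events"

# Producer: serialize records as JSON and publish them to the topic.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 42, "status": "COMPLETED"})
producer.flush()

# Consumer: read records from the topic and deserialize them back to dicts.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    group_id="etl-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # hand each record to the downstream load step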
Required Skills: Data Analysis
Basic Qualification :
Additional Skills: Data Engineer
Background Check: No
Drug Screen: No
Full Time