
Big Data Operations Engineer

Note: This job posting is outdated and the position may already be filled.

Job Location

Dallas - USA

Monthly Salary

Not Disclosed

Job Description

Role: Big Data Operations Engineer

Location: Dallas, TX (Day 1 onsite)

Duration: Contract

Job Responsibilities

Set up production Hadoop clusters with optimal configurations.

Drive automation of Hadoop deployments, cluster expansion, and maintenance operations.

Manage the Hadoop cluster, including monitoring, alerts, and notifications.

Handle job scheduling, monitoring, debugging, and troubleshooting.

Monitor and manage the cluster in all respects, notably availability, performance, and security.

Transfer data between Hadoop and other data stores, including relational databases (see the sketch after this list).

Set up High Availability/Disaster Recovery (HA/DR) environments.

Debug and troubleshoot environment failures and downtime.

Tune the performance of Hadoop clusters and Hadoop MapReduce routines.
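
A minimal sketch of the data-transfer responsibility above, assuming a Spark-based approach: the snippet copies a relational table into HDFS as Parquet via Spark's JDBC reader. The database URL, table, credentials, output path, and partition column are all hypothetical placeholders; the actual tooling (Spark JDBC, Sqoop, NiFi, etc.) would depend on the environment.

    # Minimal sketch: copy a relational table into HDFS as Parquet via Spark JDBC.
    # All connection details and paths are hypothetical placeholders; the JDBC
    # driver jar must be on the Spark classpath (e.g., passed via --jars).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdbms_to_hdfs").getOrCreate()

    orders = (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://db-host:5432/sales")  # hypothetical source
        .option("dbtable", "public.orders")
        .option("user", "etl_user")
        .option("password", "********")
        .load()
    )

    # Land the data in HDFS, partitioned so downstream jobs can prune by date.
    orders.write.mode("overwrite").partitionBy("order_date").parquet(
        "hdfs:///data/warehouse/orders"  # hypothetical target path
    )

    spark.stop()

The same read/write pattern also runs in reverse (Parquet in, JDBC out) when exporting aggregates back to a relational store.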

Skills, Experience, and Requirements

Experience with Kafka, Spark, etc.

Experience working with AWS big data technologies, such as EMR, Redshift, S3, Glue, Kinesis, DynamoDB, and Lambda (see the sketch after this list).

Good knowledge of creating volumes, security group rules, key pairs, floating IPs, images, and snapshots, and of deploying instances on AWS.

Experience configuring and/or integrating monitoring and logging solutions such as syslog and the ELK stack (Elasticsearch, Logstash, and Kibana).

Strong UNIX/Linux systems administration skills, including configuration, troubleshooting, and automation.

Knowledge of Airflow, NiFi, StreamSets, etc.

Knowledge of container virtualization.
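
As a hedged illustration of the AWS requirement above, the sketch below uses boto3 to list usable EMR clusters and to inspect objects under an S3 prefix. It assumes AWS credentials are already configured in the environment; the region, bucket, and prefix are hypothetical.

    # Minimal sketch: enumerate EMR clusters and S3 objects with boto3.
    # Assumes credentials come from the environment (profile, role, or env vars);
    # the region, bucket name, and prefix below are hypothetical.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")
    s3 = boto3.client("s3", region_name="us-east-1")

    # Clusters in RUNNING or WAITING state are the ones accepting work.
    for cluster in emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])["Clusters"]:
        print(cluster["Id"], cluster["Name"], cluster["Status"]["State"])

    # Inspect raw landing data under a hypothetical data-lake prefix.
    response = s3.list_objects_v2(Bucket="example-datalake", Prefix="raw/orders/")
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"])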

Employment Type

Full Time

Company Industry

Accounting & Auditing
