
Big Data Operations Engineer

Employer active

This posting is no longer available; the position may have been filled.

Monthly Salary

Not disclosed

Job Description

Role: Big Data Operations Engineer

Location: Dallas, TX (Day 1 onsite)

Duration: Contract

Job Responsibilities

Set up production Hadoop clusters with optimal configurations.

Drive automation of Hadoop deployments, cluster expansion, and maintenance operations.

Manage Hadoop clusters, including monitoring, alerts, and notifications.

Schedule, monitor, debug, and troubleshoot jobs.

Monitor and manage the cluster in all respects, notably availability, performance, and security.

Transfer data between Hadoop and other data stores (including relational databases).

Set up High Availability/Disaster Recovery environments.

Debug and troubleshoot environment failures and downtime.

Tune the performance of Hadoop clusters and Hadoop MapReduce routines.

Skills - Experience and Requirements

Experience with Kafka, Spark, etc.

Experience working with AWS big data technologies (EMR, Redshift, S3, Glue, Kinesis, DynamoDB, and Lambda).

Good knowledge of creating volumes, security group rules, key pairs, floating IPs, images, and snapshots, and of deploying instances on AWS.

Experience configuring and/or integrating with monitoring and logging solutions such as syslog and ELK (Elasticsearch, Logstash, and Kibana).

Strong UNIX/Linux systems administration skills, including configuration, troubleshooting, and automation.

Knowledge of Airflow, NiFi, StreamSets, etc.

Knowledge of container virtualization.


Employment Type

Full-time

About the Company
