Job Description:
- Technical Experience: 6-8 years of related experience with PySpark and AWS (Glue, EMR, Lambda & Step Functions, S3).
- 3 years of experience in Big Data/ETL with Python, Spark, and Hive, and 3 years of experience in AWS.
- Strong experience in PySpark.
- Strong experience in Python.
- Strong experience in Unix scripting, Spark SQL, and Hive.
- Experience in writing SQL and creating views.
- Excellent oral and written communication skills.
- Experience in the insurance domain is an added advantage.
- Good understanding of the Hadoop ecosystem and architecture (HDFS, MapReduce, Pig, Hive, Oozie, YARN).
- Knowledge of AWS services such as Glue, S3, Lambda, Step Functions, and EC2.
- Exposure to data migration from one platform (Hive/S3) to a new platform (Databricks).
- Ability to prioritize, plan, organize, and manage multiple tasks efficiently while maintaining a high quality of work product.
Key Skills (Primary): PySpark; AWS (Glue, EMR, Lambda & Step Functions, S3); Big Data with Python/Spark/Hive experience; Big Data migration exposure.
Key Skills (Secondary): Informatica BDM/PowerCenter, Databricks, MongoDB
Big Data, Hadoop, Spark, Scala, Python, Hive, Big Data Migration, AWS, PySpark