This position is part of a growing team building world-class, large-scale Big Data architectures. The individual should have a sound understanding of programming principles, experience programming in Java, Python, or similar languages, and can expect to spend the majority of their time coding.
Experience: 4-7 Years
Location: Bangalore & Hyderabad
Mandatory Skills: Java / Python, Hadoop, MapReduce, AWS APIs, Data Structures & Algorithms
Responsibilities:
- Hands-on coder with good experience in programming languages such as Java, Python, or Scala.
- Hands-on experience with the Big Data stack, including Hadoop, MapReduce, Spark, HBase, and Elasticsearch.
- Good understanding of programming principles and development practices such as check-in policies, unit testing, and code deployment.
- Self-starter, able to grasp new concepts and technologies and translate them into large-scale engineering developments.
- Excellent experience in application development and support, integration development, and data management.
- Work daily with customers across leading Fortune 500 companies to understand their strategic requirements.
- Stay up to date on the latest technology to ensure the greatest ROI for customers.
- Hands-on coder with a good understanding of enterprise-level code.
- Design and implement APIs, abstractions, and integration patterns to solve challenging distributed computing problems.
- Experience in defining technical requirements, data extraction, data transformation, automating and productionizing jobs, and exploring new big data technologies within a parallel processing environment.
- Must be a strategic thinker with the ability to think unconventionally / outside the box.
- Analytical and data-driven orientation.
- Raw intellect, talent, and energy are critical.
- Understands the demands of a private, high-growth company.
- Ability to be both a leader and a hands-on doer.
Qualifications:
- 4+ years of relevant work experience and a degree in Computer Science or a related technical discipline are required.
- Experience with functional and object-oriented programming in Java, Python, or Scala is a must.
- Hands-on experience with the Big Data stack, including Hadoop, MapReduce, Spark, HBase, and Elasticsearch.
- Good understanding of AWS services and experience working with APIs and microservices.
- Effective communication skills (both written and verbal).
- Ability to collaborate with a diverse set of engineers, data scientists, and product managers.
- Comfort in a fast-paced startup environment.
Preferred Qualifications:
- Experience in agile methodology
- Experience with database modeling and development, data mining, and data warehousing.
- Experience in the architecture and delivery of enterprise-scale applications, with the capability to develop frameworks, design patterns, etc.
- Should be able to understand and tackle technical challenges, propose comprehensive solutions, and guide junior staff.
- Experience working with large complex data sets from a variety of sources