Qualifications
- Bachelor's degree in Computer Science or Computer Engineering from an accredited university
- 10 years of relevant industry experience after completing education
- 5 years of industry experience designing and testing Scala and Java applications
- Strong working knowledge of the functional programming paradigm and category theory in languages such as Scala or Haskell
- Working experience with real-time streaming and batch processing using Apache Spark and Apache Flink (experience with one platform is also acceptable)
- Strong knowledge of distributed file systems, memory management, and sharding and partitioning of datasets/DataFrames
- Strong fundamentals in functional programming, object-oriented programming, RESTful architectures, design patterns, data structures, and algorithms
- Experience with microservices infrastructure management for development; working with Docker, Kubernetes, and Helm/Terraform
- Experience with Microsoft Azure cloud services, including exposure to PaaS offerings such as Service Bus, Event Hubs, Blob Storage, Key Vault, API Management, Function Apps (serverless), and Azure Databricks
- Expertise in Scala is mandatory; Java is optional
- Microservices implementation skills are a plus
- Experience with OAuth 2.0 (JWT), Swagger, Postman, and the OpenAPI Specification
- Relational (SQL Server/Postgres) and NoSQL (HBase) databases; Delta tables (Parquet and Avro formats)
- Big Data/Geospatial: HBase 2.1.6 (HDI 4.0), GeoMesa 3.0.0
- Caching (Redis, Play cache, Caffeine, or others)
- Experience working with cloud platforms and services such as Azure or AWS
- Good working knowledge of CI/CD environments (preferably Azure DevOps), Git or similar configuration management software, and build automation (Maven)
- Knowledge of testing tools such as ScalaTest, JUnit, and Mockito
Responsibilities
- The digital platform will enable products that integrate with connected CNH Industrial tractors, sprayers, and combines, supporting a wide range of farm management capabilities
- Lead a small team of software engineers and data engineers while also contributing individually to the design, development, and testing of data pipelines for data parsing, enrichment, and processing
- Generate rapid prototypes for feasibility testing
- Contribute to growing team members and building a strong, cohesive team; provide guidance and mentorship
- Help and guide the team in their day-to-day tasks