We are building the first no-code AI platform for the $12 trillion AEC (Architecture, Engineering and Construction) industry, powered by third-generation explainable AI: users create complex AI use cases with zero coding at the frontend, while neuro-symbolic AI works under the hood.
Join a team of people committed to taking big leaps using AI.
Tasks
We are looking for a Machine Learning Integration Engineer to join our Munich office. You will work on optimising, deploying and integrating machine learning models within our software platform. The position entails responsibility for the entire model lifecycle from the software perspective, including how models communicate with APIs, data streams, each other and the database.
- Optimise AI Models (compression, quantization) on GPUs for both Cloud and Edge
- Manage the AI model lifecycle through training, validation and deployment
- Manage interprocess communication between AI models and other functional units and processes (APIs, data streams, databases) using middleware frameworks such as Kafka and ROS
- Deploy the optimised AI models and tailor them for specific Edge and Cloud environments
- Implement containerization of AI models and scale these using frameworks such as Kubernetes on public and private clouds
Requirements
We are looking for candidates with demonstrable experience in Deep Learning. In addition to method knowledge, the candidate should have extensive industry experience in taking deep learning models to production and scaling them on large-scale customer data.
- MS in Computer Engineering or related fields
- 5+ years of hands-on experience in deploying and integrating Deep Learning models within a larger software framework
- 5+ years of hands-on experience with Linux and good familiarity with Linux OS design, building and running software using the command line
- Experience in AI model optimization techniques (compression, quantization, etc.)
- Experience in embedded systems, and real-time Edge OS development
- Experience in automation strategies for deployment of Deep Learning models
- Demonstrable experience in deployment and optimisation of AI systems on Cloud, server and Edge hardware
- Experience in parallel computing platforms and general-purpose GPU computing (GPGPU) such as CUDA and OpenCL
- Experience with middleware frameworks such as Kafka and ROS
- Experience with containerization of AI models and scaling these using frameworks such as Kubernetes
- Exceptional programming skills with C/C++ and Python
- Experience with computer architecture and low-level optimisation, whether via compilers or by hand
- Experience in SoC and/or GPU acceleration for AI
- Experience in peripheral communication protocols like SPI, UART, CAN, I2C, etc.
- Experience with at least one wireless communication protocol such as LTE/5G, Bluetooth, Wi-Fi or LoRaWAN
- Experience with CI/CD systems such as GitHub Actions/GitLab/Jenkins is a plus
Benefits
Competitive salary + Stock options