We seek a motivated and innovative Engineer to join our dynamic team. This role offers the opportunity to work on cutting-edge machine learning and artificial intelligence projects, combining technologies and concepts such as GenAI, Prompt Engineering, Cloud, and MLOps to produce agentic solutions. As an MLOps Engineer, you will engage in the conceptualization, design, and implementation of these solutions, gaining valuable experience in a rapidly evolving field.
We consider MLOps pivotal to the seamless integration of machine learning models into production systems, with a focus on scalability, monitoring, and automation. You will be responsible for designing ML pipelines, optimizing deployment workflows, and maintaining the infrastructure that supports reliable and efficient AI-driven applications.
Qualifications:
- At least 4 years of hands-on experience in DevOps and/or MLOps.
- Demonstrated experience in MLOps with a focus on production-level ML deployment; experience deploying large-scale LLMs is a bonus.
- Hands-on experience with any cloud platform (AWS, Azure, GCP); on-premise automation experience is a bonus.
- Proficiency with Docker, Kubernetes, and IaC tools such as Terraform.
- Experience with CI/CD pipelines.
- Strong understanding of ML model lifecycle management.
- Familiarity with popular ML frameworks (e.g., TensorFlow, PyTorch).
- Proficient scripting skills in Python, Bash, or similar.
- Familiarity with machine learning concepts, including model training, fine-tuning, testing, and evaluation (a short illustrative sketch follows this list).
- Familiarity with GenAI concepts, tools, and frameworks.
- Strong problem-solving and critical-thinking skills with a systems-oriented mindset.
- Strong communication and interpersonal skills.
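To make the model-lifecycle expectations above concrete, the sketch below walks through a minimal train/test/evaluate loop in Python. It is illustrative only: the dataset, model choice, and metric are assumptions, and any comparable framework (TensorFlow, PyTorch, etc.) would serve equally well.

```python
# Minimal train / test / evaluate sketch (illustrative assumptions only).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for real project data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=200)  # stand-in for any ML framework model
model.fit(X_train, y_train)               # training

predictions = model.predict(X_test)       # testing
print(f"accuracy: {accuracy_score(y_test, predictions):.3f}")  # evaluation
```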
Responsibilities:
- Develop, deploy, and maintain scalable MLOps pipelines to automate key workflows.
- Use Infrastructure-as-Code (IaC) tools like Terraform or CloudFormation for automated deployments.
- Collaborate with data scientists, engineers, and other teams to create optimized, production-ready solutions.
- Deploy and orchestrate AI-based applications using Docker, Kubernetes, and/or other tools.
- Implement monitoring and tracking for AI-based applications, ensuring robust alert systems and dashboards for model health and performance (see the monitoring sketch after this list).
- Design and optimize CI/CD pipelines for ML workflows, ensuring efficient and reliable deployment of models and supporting infrastructure.
- Develop and enforce best practices for MLOps, including versioning and scalable deployments.
- Research and experiment with the latest advancements in machine learning and artificial intelligence.
- Create and present demos, proofs-of-concept, and technical documentation for internal and client-facing projects.
- Apply systems thinking and critical analysis to tackle complex challenges and optimize algorithms and solution architectures.
- Participate in team workshops and knowledge-sharing sessions to share insights and learn from peers.
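As a hedged illustration of the monitoring responsibility above, the sketch below probes a hypothetical model-serving endpoint and reports latency and status. The endpoint URL, payload, and latency budget are assumptions; in practice the measurements would feed a metrics backend, dashboards, and alert rules.

```python
# Minimal health/latency probe for a model-serving endpoint (illustrative only).
# ENDPOINT, the payload, and LATENCY_BUDGET_S are hypothetical assumptions.
import time

import requests

ENDPOINT = "http://localhost:8080/predict"  # hypothetical serving endpoint
LATENCY_BUDGET_S = 0.5                      # assumed alerting threshold


def probe() -> None:
    payload = {"inputs": [[5.1, 3.5, 1.4, 0.2]]}  # dummy feature vector
    start = time.monotonic()
    try:
        response = requests.post(ENDPOINT, json=payload, timeout=5)
        latency = time.monotonic() - start
        healthy = response.status_code == 200 and latency <= LATENCY_BUDGET_S
        print(f"status={response.status_code} latency={latency:.3f}s healthy={healthy}")
        # A production version would push these values to a metrics backend
        # (e.g. Prometheus) and wire them to dashboards and alert rules.
    except requests.RequestException as exc:
        print(f"probe failed: {exc}")  # would raise an alert in production


if __name__ == "__main__":
    probe()
```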
Additional Information:
When joining Intertec, you enter an environment that inspires you to learn, grow, and lead by example. At Intertec you can build a dynamic career in your field of interest, work with professionals to gain industry insight, get exposure to international projects, and enjoy ongoing learning and development opportunities.
Our team loves to have a good time, express their personality, and show it off to the world. Whether we're celebrating a birthday, gathering for a team event, or marking a company success, we take pride in doing it with a smile. Life is too short not to have fun while you work.
Remote Work:
No
Employment Type:
Full-time