Key Responsibilities:
- Cloud Solutions Design & Implementation: Architect, deploy, and maintain robust, scalable, and secure cloud solutions on platforms such as AWS and Azure.
- MLOps & Automation: Build and optimize MLOps pipelines on cloud platforms, ensuring efficient deployment, monitoring, and scaling of machine learning models (a minimal illustrative sketch follows this list).
- CI/CD Pipeline Orchestration: Implement CI/CD pipelines using tools like GitLab CI, GitHub Actions, and Jenkins to automate and streamline workflows across data science projects.
- Data Pipeline & Engineering Infrastructure: Design, develop, and manage data pipelines and infrastructure to support large-scale enterprise machine learning systems, ensuring high availability and low-latency processing.
- Infrastructure Management: Oversee network operations and infrastructure management for machine learning and AI systems, ensuring optimized performance, security, and reliability.
- Collaboration with DevOps: Collaborate with DevOps teams to manage containerization (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation) for ML model deployment and monitoring.
- Deep Learning & NLP: Apply cutting-edge Deep Learning and Natural Language Processing techniques to solve complex problems, particularly in areas such as text analysis, speech recognition, and language generation.
- Large Language Model (LLM) Infrastructure: Design and maintain infrastructure supporting Large Language Models (LLMs), enabling scalable deployment, optimization, and real-time inference.
- Continuous Learning & Development: Stay current with the latest advancements in data science, cloud infrastructure, and MLOps to drive innovation within the organization.
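
To make the MLOps responsibility above concrete, here is a minimal, illustrative sketch of experiment tracking with MLflow. The experiment name, dataset, and scikit-learn model are placeholders for illustration only, not part of the role's prescribed stack.

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder experiment name; a real pipeline would parameterize this.
mlflow.set_experiment("example-classifier")

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the trained model artifact so runs can be
    # compared, reproduced, and later promoted through a model registry.
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")

In practice this tracking step would sit inside an automated pipeline (for example, triggered by CI/CD) rather than being run by hand.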
Skills & Qualifications:
- 3-4 years of experience in Data Science, Machine Learning, or related fields, with expertise in cloud infrastructure and MLOps.
- Hands-on experience with cloud platforms such as AWS, Azure, or GCP, with strong knowledge of cloud-native services for machine learning and AI.
- Expertise in MLOps tools such as MLflow, Kubeflow, or SageMaker.
- Strong proficiency in CI/CD pipelines using tools such as GitLab CI, GitHub Actions, or Jenkins.
- Deep knowledge of DevOps practices and tools, including Docker, Kubernetes, Terraform, and Ansible.
- Experience with deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Hands-on experience with Natural Language Processing (NLP) libraries and techniques (see the serving sketch after this list).
- Strong programming skills in Python, R, or similar languages.
- Proven ability to design, deploy, and manage large-scale LLM infrastructure.
- Understanding of network operations and infrastructure management for cloud environments.
- Strong analytical and problem-solving skills, with a focus on delivering scalable solutions.
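
As a small illustration of the NLP and real-time inference skills listed above, the following is a minimal sketch of a model-serving endpoint. FastAPI, the default sentiment-analysis pipeline, and the /predict route are illustrative assumptions, not requirements of the role.

from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Load a small pretrained sentiment model once at startup; production LLM
# serving would add batching, autoscaling, and monitoring around this step.
classifier = pipeline("sentiment-analysis")

class TextIn(BaseModel):
    text: str

@app.post("/predict")
def predict(payload: TextIn):
    # Run inference on the submitted text and return the label and confidence.
    result = classifier(payload.text)[0]
    return {"label": result["label"], "score": float(result["score"])}

Served with an ASGI server such as uvicorn, this is the kind of lightweight inference layer that would sit behind the deployment and monitoring infrastructure described above.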
Keywords: machine learning, data science, cloud, pipelines, infrastructure