Role: Sr AI Security Engineer
Location: New York NY (Hybrid)
Duration: Long Term
Key Responsibilities:
- Design, implement, and execute test approaches for GenAI systems (MyCity Chatbot) to identify security flaws, particularly those impacting the confidentiality, integrity, or availability of information.
- Perform various types of tests, such as functional, regression, performance, and usability testing, to evaluate the behavior and performance of AI algorithms and models.
- Create, implement, and execute test plans and strategies for evaluating AI systems, including defining test objectives, selecting suitable testing methods, and identifying test scenarios.
- Document test methods, results, and recommendations in clear, concise reports for stakeholders.
- Perform security assessments, including creating, updating, and maintaining threat models and the security integration of GenAI platforms. Ensure that security design and controls are consistent with OTI's security architecture principles.
- Design security reference architectures and implement/configure security controls, with an emphasis on GenAI technologies.
- Provide AI security architecture and design guidance, and conduct full-stack architecture reviews of software for GenAI systems and platforms.
- Serve as a subject matter expert on information security for GenAI systems and applications in cloud/vendor and on-premises environments.
- Discuss AI/ML concepts proficiently with data science and ML teams to identify and develop solutions for security issues.
- Collaborate with engineering teams to perform advanced security analysis on complex GenAI systems, identifying gaps and contributing to design solutions and security requirements.
- Identify and document defects, irregularities, or inconsistencies in AI systems, and work closely with developers to rectify and resolve them.
- Ensure the quality, consistency, and relevance of data used for training and testing AI models (includes collecting, preprocessing, and validating data).
- Assess AI systems for ethical considerations and potential biases to ensure they meet ethical standards and promote inclusivity and diversity.
- Collaborate with diverse teams, including developers, data scientists, and domain experts, to understand requirements, validate assumptions, and align testing efforts with project goals.
- Conduct research to identify vulnerabilities and potential failures in AI systems.
- Design and implement mitigations, detections, and protections to enhance the security and reliability of AI systems.
- Perform model input and output security assessments, including prompt injection testing and security assurance.
Mandatory Qualifications: MUST MEET ALL!
- Bachelor's degree in computer science, electrical or computer engineering, statistics, econometrics, or a related field, or equivalent work experience.
- 12 years of hands-on experience in cybersecurity or information security.
- 4 years of programming experience with demonstrated advanced skills in Python and the standard ML stack (TensorFlow/PyTorch, NumPy, Pandas, etc.).
- 4 years of experience working in cloud environments (Azure, AWS, and GCP).
- Demonstrated proficiency with fundamental AI/ML concepts and technologies, including machine learning, deep learning, NLP, and computer vision.
- Demonstrated ability (expertise preferred) in attacking GenAI products and platforms.
- Demonstrated recent experience with large language models.
- Demonstrated experience using AI testing frameworks and tools such as TensorFlow, PyTorch, or Keras.
- Demonstrated ability to write test scripts, automate test cases, and analyze test results using the programming languages and testing frameworks listed above.
- Demonstrated ability to identify and document defects, irregularities, or inconsistencies in AI systems, and to work closely with developers to rectify and resolve them.
- Ability to work independently to learn new technologies, methods, processes, frameworks/platforms, and systems.
- Excellent written and verbal communication skills to articulate challenging technical concepts to both lay and expert audiences.
- Ability to stay current on the latest developments, trends, and best practices in both software testing and artificial intelligence.
Desired Qualifications: MUST MEET AT LEAST 80%
- 4 years of experience with Natural Language Processing (NLP) and Large Language Models (LLMs).
- Excellent problem-solving and critical-thinking skills, with attention to detail in an ever-changing environment.
- Background in designing and implementing security mitigations and protections, and/or publications in the space.
- Ability to work collaboratively in an interdisciplinary team environment.
- Participation (past or current) in CTF/GRT/AI red-teaming events and/or bug bounties, or developing or contributing to OSS projects.
- Understanding of ML lifecycle and MLOps.
- Ability to perform various types of tests, such as functional, regression, performance, and usability testing, to evaluate the behavior and performance of AI algorithms and models.
- Ability to ensure the quality, consistency, and relevance of data used for training and testing AI models (includes collecting, preprocessing, and validating data).
- Ability to assess AI systems for ethical considerations and potential biases to ensure they meet ethical standards and promote inclusivity and diversity.
- Ability to work in, and provide technical leadership to, cross-functional teams to develop and implement AI/ML solutions, including capabilities that leverage LLM technology.
- Highly flexible and willing to learn new technologies.
Note: Momento USA is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.