About the role
Establish and operationalize security controls for emerging Artificial Intelligence (AI) and Machine Learning (ML) capabilities across the enterprise. This role is responsible for embedding security into AI solution design, protecting AI models and data pipelines, and enabling secure adoption of AI use cases throughout business and technology functions.
The role works closely with Digital, Data, AI, Security Architecture, Engineering, and Cyber Defense Operations teams to define secure AI architecture patterns, implement guardrails, and ensure AI platforms operate within the client’s cybersecurity, risk, and governance standards. The ideal candidate combines strong cybersecurity engineering capability with practical knowledge of AI platforms, model risks, and enterprise technology integration.
What you will do:
AI Security Architecture & Engineering
- Define secure architecture patterns for AI and machine learning solutions, ensuring protection of models, training pipelines, inference environments, and supporting data flows.
- Establish secure integration patterns for AI services across enterprise applications, APIs, cloud platforms, and data environments.
- Review AI solution designs to ensure alignment with enterprise security architecture standards and secure-by-design principles.
- Support implementation of secure controls across AI development, testing, deployment, and production environments.
AI Risk Management & Security Controls
- Identify, assess, and mitigate AI-specific threats including model poisoning, prompt injection, adversarial attacks, unauthorized model access, data leakage, and misuse of AI outputs.
- Define and implement security guardrails for AI model access, API usage, prompt controls, and secure interaction with enterprise data sources.
- Establish controls to protect sensitive training data, embeddings, prompts, and inference outputs across AI workflows.
- Support validation of third-party AI services and external model integrations from a cybersecurity risk perspective.
Governance, Standards & Responsible AI Enablement
- Establish AI security standards, engineering guardrails, and governance practices aligned with regulatory requirements, enterprise risk expectations, and responsible AI principles.
- Partner with Digital and AI teams to enable secure AI use cases, so that security accelerates responsible business adoption rather than blocking it.
- Support creation of AI security review checkpoints for new AI initiatives, pilots, and production deployments.
- Contribute to enterprise AI security policies, reference architectures, and operational standards.
Operational Security & Monitoring
- Collaborate with Cyber Defense Operations to operationalize AI-related detection, monitoring, and response capabilities.
- Support development of monitoring use cases for AI misuse, abnormal model behavior, unauthorized access, and suspicious data movement.
- Define logging and telemetry requirements for AI platforms to improve visibility and incident readiness.
- Support integration of AI platform telemetry into enterprise detection and monitoring tools where applicable.
Cross-Functional Collaboration
- Work closely with Security Architecture, Cloud Engineering, Data teams, Application teams, and AI program owners to ensure consistent security adoption.
- Support security reviews for AI vendors, AI-enabled SaaS platforms, and internally developed AI capabilities.
- Provide technical guidance to project teams on secure AI implementation and operational controls.