Join our Innovation Team, where we explore cutting-edge concepts at the intersection of Machine Learning and Security. Our mission is to develop forward-looking solutions—such as model protection, privacy-preserving ML, security for agentic AI, and anomaly detection—that will later be integrated into our Edge products. This requires high-level innovation skills combined with a hands-on mindset.
If you are passionate about building secure AI systems, exploring new ideas, and turning concepts into prototypes, this role is for you:
Define strategies and implement solutions for protecting ML models and sensitive data during deployment. Focus areas include intellectual property (IP) protection, privacy-preserving inference, and resilience against adversarial manipulation.
Design and implement model obfuscation and secure packaging techniques.
Develop IP protection strategies.
Enable secure execution of customer models using Trusted Execution Environments (TEEs).
Assess and mitigate adversarial ML threats such as evasion and poisoning attacks.
Define privacy-preserving inference mechanisms (e.g., differential privacy).
Advise on compliance with AI security and privacy regulations such as the GDPR and the EU AI Act.
Have a background in Computer Science, Cybersecurity, or Cryptography and a strong interest in applied ML, OR
Have a background in Machine Learning and an interest in cybersecurity.
Knowledge of model protection techniques and IP security.
Familiarity with adversarial ML attacks and defenses.
Understanding of TEEs and secure enclaves.
Knowledge of privacy-preserving ML concepts (differential privacy, federated learning basics).
Awareness of regulatory frameworks for AI security and privacy.
Please note: The successful candidate will be responsible for security-related tasks. The assignment may fall within the scope of security certifications; a conscientious and reliable way of working is therefore essential.