Post-Doctoral Researcher in Agentic AI Security Frameworks
If you are enthusiastic about shaping Huawei’s European Research Institute together with a multicultural team of leading researchers, this is the right opportunity for you!
About Huawei
Huawei is a global leader in information and communications technology (ICT), renowned for its pioneering work in AI hardware, large-scale computing infrastructure, and integrated software-hardware solutions. With a workforce of over 194,000 employees across more than 170 countries, Huawei operates the world’s largest R&D organization, including advanced research centers dedicated to next-generation AI and processor technologies.
At Huawei, innovation isn’t just a buzzword—it’s built into the DNA of the company. Its full-stack AI ecosystem spans hardware accelerator architecture, firmware, system integration, and workload scheduling, all the way to algorithm optimization. Applications cover a wide range of scenarios, from wearables to entire clusters and data centers.
About the lab
With more than 20 sites across Europe, and over 1500 researchers, Huawei’s European Research Institute (ERI) oversees fundamental and applied technology research, academic research cooperation projects, and strategic technical planning across our network of European R&D facilities.
This specific role is based at Huawei’s Zurich Research Center, which was launched six years ago and is already home to more than 160 experts. You would be joining the AI Computing Group within the Computing Systems Lab, a dynamic team of 20+ researchers focused on advancing AI solutions across hardware, systems, software, and algorithms. Join us at the forefront of AI computing systems innovation!
Problem Statement
AI models that can meticulously reason, prepare multi-stage plans, and execute them to achieve complex tasks are fundamental for building future AGI systems. It is crucial that LLMs not only generate very high-quality plans and responses, but also that they can prepare code snippets to execute, test, self-reflect, or even reach out to the outside world—both to ensure the correctness of their responses and to provide quasi-human-level intelligence in guiding us through complex tasks. However, an AI model that can execute code or communicate with the outside world (e.g., the internet) poses a monumental security threat. Recent research shows numerous ways to divert an AI agent from safe behaviour: influencing its alignment so that it generates malware or uses unsafe APIs, abusing excessive permissions to leak sensitive user information (e.g., posting a user query directly to a social media website), polluting the long-term model context, generating malicious code, and so on.
Therefore, the security and safety of current LLM-based agentic systems are questionable and remain an open research question. Together, we will investigate the broader security aspects of agentic systems and design and verify the fundamental building blocks necessary for a trustworthy AI agent system.
Responsibilities
We are designing next-generation, trustworthy, reasoning, agentic systems and investigating potential attack surfaces in order to mitigate them. Specifically, we are building the algorithms, tools, and systems for efficient, highly secure agents that solve complex tasks through information retrieval, code generation, tool calling, communication with other agentic systems, and more.
As a postdoctoral researcher, your responsibility is to contribute to these research endeavours: identifying new attacks against complex agentic systems, designing mitigation strategies, and inventing new building blocks that prevent unintended leaks of sensitive user data and harmful behaviour. You will also be involved in rigorous security analysis and formal/semi-formal verification of secure agentic systems. In summary, you will contribute to fundamental AI security research, build prototypes, produce research papers for top-tier AI and security venues, and contribute to patent applications.
Requirements
• You have a PhD in computer science, specifically either in AI or security, from a reputable university.
• Candidates with a security background:
  - Very good understanding of OS kernels and low-level software architecture.
  - Strong understanding of low-level systems (C/C++) programming.
  - Experience with either TEEs (SGX/SEV/TrustZone) or sandboxing mechanisms.
• Candidates with an AI background:
  - Very strong foundation in AI theory.
  - Understanding of inference and training frameworks.
  - Some familiarity with attacks on AI systems and defence mechanisms (e.g., prompt-injection guards).
• General Linux power user skills are an asset.
• Ability to work independently on nontrivial analysis and development tasks.
• Strong communication skills, with the ability to perform and present a detailed analysis of experimental results.
• Strong motivation to join a cutting-edge industrial research environment
By applying to this position, you agree with our PRIVACY STATEMENT. You can read in full our privacy policy at https://www.huawei.com/en/privacy-policy.
Department: Computing Systems
Location: Huawei Research Center Zürich