Lecture: Security of Artificial Intelligence (Comprimising Large Language Models LLMs)

Sdílet
Vložit
  • čas přidán 26. 06. 2024
  • Discover our first highlight event and recording from June 26th, 2024, where we delve(d) into an in-depth analysis of the #security vulnerabilities surrounding #LargeLanguageModels (LLMs). Despite their broad acceptance and use, LLMs are alarmingly prone to attacks, particularly through a subtle approach known as Indirect Prompt Injection. This technique enables attackers to seize control over a dialogue without the user's awareness.
    In this recording, we discuss this vulnerability-first discovered and published by sequire technology in 2023. The severity of this threat has led to its ranking as the top menace in language models by the OWASP Top 10.
    Our lecturer, Dr. Christoph Endres-a seasoned AI researcher and the managing director of sequire technology-will shed light on present and impending attacks on LLMs and explain why current defensive measures fall short.
    Join us to unravel the mysteries of AI security and learn how to better safeguard our digital world and applications. Subscribe and stay updated for more on #AI security!

Komentáře •