Data Poisoning in LLMs: Just 250 Documents Create a Backdoor

Discover how just 250 malicious documents can secretly compromise Large Language Models with backdoors, and learn simple steps to keep your AI systems safe.

Level: beginner

By Shaik Hamzah

Category: discussion