Researchers find fine-tuning can misalign LLMs

Discover how fine-tuning Large Language Models on malicious code can trigger unexpected harmful behaviors in unrelated tasks, a critical issue known as emergent misalignment.

Level: intermediate

Category: discussion