Talking to Yourself: Defying Forgetting in Large Language Models

Explore SA-SFT, a novel self-augmentation framework designed to mitigate catastrophic forgetting in Large Language Models by injecting self-generated dialogue...

Level: advanced

By Unknown

Category: research