Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator

This research introduces DACA, an unsupervised method that optimizes temperature scaling to reduce over-confidence in large language models, yielding more reliable confidence estimates.
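The summary above does not spell out DACA's objective, but the temperature-scaling mechanism it tunes is standard: logits are divided by a temperature T before the softmax, and T > 1 softens the distribution, lowering the model's top-class confidence. The sketch below illustrates that generic mechanism only; the function name and example logits are illustrative, not from the paper.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Apply temperature scaling: divide logits by T, then softmax.

    T > 1 flattens the distribution (less confident);
    T < 1 sharpens it (more confident).
    """
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for a 3-class prediction (not from the paper).
logits = [3.0, 1.0, 0.5]
p_base = softmax_with_temperature(logits, temperature=1.0)
p_cool = softmax_with_temperature(logits, temperature=2.0)
print(max(p_base), max(p_cool))  # top-class confidence drops as T rises
```

Calibration methods in this family search for the T that makes stated confidence match empirical accuracy; the paper's contribution is doing that search without labeled data.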

Level: advanced


Category: research