

Possibly, a reversed motivation - the training goal of such an agent would not be nice, smooth output, but shooting down misinformation.
But I have serious doubts about whether all of that is feasible, given the computational cost of running large language models.
It's like with earthquakes. At first, when nobody knows jack s**t, they tell you 10 people died.
When the statistics come in, often enough that initial 10 turns into 10 000.
With a heat wave spanning half a continent and breaking records, the typical mortality to expect (based on experience) is at least 1000 people (some of them old and about to go anyway, but pushed over the edge by the heat).