Generative AI models are skilled in the art of bulls**t
Large language models are unconcerned with truth because they have no concept of it – and therein lies the danger
LIES are not the greatest enemy of the truth, according to the philosopher Harry Frankfurt. Bulls**t is worse.
As he explained in his classic essay On Bulls**t (1986), a liar and a truth teller are playing the same game, just on opposite sides. Each responds to the facts as they understand them: the truth teller accepts the authority of truth, while the liar rejects it. But a bulls**tter ignores these demands altogether. “He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bulls**t is a greater enemy of the truth than lies are.” Such a person simply wants to persuade, irrespective of the facts.
Sadly, Frankfurt died in 2023, just months after ChatGPT was released. But reading his essay in the age of generative artificial intelligence (GenAI) provokes a queasy familiarity. In several respects, it neatly describes the output of large language models (LLMs). They are not concerned with truth because they have no conception of it; they operate by statistical correlation, not empirical observation.