In a recent paper, “Beware of Botshit: How to manage the epistemic risks of generative chatbots”, Hannigan et al. introduce (I think) the concept of ‘botshit’.

It describes a particular way of using output from one of the currently fashionable generative AI models - ChatGPT, Bard et al. - and is defined as:

Chatbot generated content that is not grounded in truth and is then uncritically used by a human for communication and decision-making tasks.

And “not grounded in truth” is a common feature of today’s large language models because, at the end of the day:

…generative chatbots are not concerned with intelligent knowing but with prediction.

An obvious example of botshit in practice might be the hundreds of websites that NewsGuard identified as having been created by generative AI and then published by humans without care or concern.

Botshit can be distinguished from “bullshit”, which involves the same uncritical application of potential nonsense to a task, but where the content has a human origin:

Human-generated content that has no regard for the truth which a human then applies for communication and decision-making tasks

And it’s distinguishable again from “hallucinations”, which occur when chatbots produce:

…coherent sounding but inaccurate or fabricated content

The difference is that botshit requires not only that a hallucination may have occurred, but also that a human actively used the result for a given task.

Once LLM content potentially containing a hallucination is used by a human, this application transforms it into botshit.

To date I’ve quite liked the “fluent bullshit” formulation for describing generative AI nonsense output, although botshit is both catchier and makes it easy to distinguish the specific act of applying AI-generated ramblings to a task from other adjacent concepts.