The "Godfather of AI", Geoff Hinton, explains that one of the greatest risks is not that chatbots will become super-intelligent, but that they will generate text that is super-persuasive without being intelligent, in the manner of Donald Trump or Boris Johnson.
In a world where evidence and logic are not respected in public debate, Hinton imagines that systems operating without evidence or logic could become our overlords by becoming superhumanly persuasive, imitating and supplanting the worst kinds of political leader.
AI systems like ChatGPT are trained on text from Twitter, Facebook, Reddit, and other huge archives of bullshit, alongside plenty of actual facts (including Wikipedia and text ripped off from professional writers). But there is no algorithm in ChatGPT to check which parts are true. The output is literally bullshit, exactly as defined by the philosopher Harry Frankfurt: speech produced with indifference to whether it is true or false. It is what we would expect from Graeber’s study of bullshit jobs.
Just as Twitter rewards bullshitting politicians who don’t care whether what they say is true or false, so the archives of what they have said can be used to train automatic bullshit generators.
The problem isn’t AI. What we need to regulate is the bullshit.