I spent a lot of time writing ‘From Inputs To The Big Picture: An AI Roundup’, partly because it’s a very big topic, but also because I kept circling back to the persuasive aspect of AI.
In recent testing, GPT-4 proved 82% more persuasive than human debaters, and AI systems can now infer emotions from what we write.
That, friendly reader, scares me, because we do not live in a perfect world where everyone has good intentions.
The key difference between manipulation and persuasion is intention. An AI by itself has no intention, at least for now, but those who create it do. They could consciously shape an artificial intelligence through its training data and algorithms, effectively becoming puppet-masters of a persuasive AI. Do they mean well?
Sure. Everyone means well. But what does ‘well’ mean to them? No villain ever really believes their intentions are bad, despite what movies and television might have people think. Villains come dressed in good intentions. Good villains are… persuasive, and only those not persuaded might see a manipulation for what it is, even when the villain themself does not.
After all, Darth Vader didn’t go to the dark side for cookies, right?
There’s so much to consider here. The imagination runs wild. It should. How much of the persuasion around AI, for example, is itself manipulation?
I think we’re in for a bit of trouble, and it’s already begun.