Sam Altman: AI Is Learning ‘Superhuman Persuasion’

October 30, 2023
The self-effacing Altman says this “may lead to some very strange outcomes.” Really, Sam? You built it, right? You could just as easily stop it. This smooth-talking Technocrat continues to issue stark warnings to the public while barking orders at his development team to hurry up. This disingenuous behavior should serve as a true warning that this nimrod is leading the world down the rabbit hole.

When AI meets quantum computing, the world will be in a state of shock and disbelief.

Already, the elitist CFR talks about AI’s impact on the 2024 U.S. elections, and its counterpart in the UK, Chatham House, talks about how “AI could sway voters in 2024’s big elections.” This implies mass persuasion at scale. ⁃ TN Editor

Humanity is likely still a long way away from building artificial general intelligence (AGI), or an AI that matches the cognitive function of humans — if, of course, we’re ever actually able to do so.

But whether such a future comes to pass or not, OpenAI CEO Sam Altman has a warning: AI doesn’t have to be AGI-level smart to take control of our feeble human minds.

“I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence,” Altman tweeted on Tuesday, “which may lead to some very strange outcomes.”

While Altman didn’t elaborate on what those outcomes might be, it’s not a far-fetched prediction. User-facing AI chatbots like OpenAI’s ChatGPT are designed to be good conversationalists and have become eerily capable of sounding convincing — even if they’re entirely incorrect about something.

At the same time, humans are already beginning to form emotional connections to various chatbots, and those attachments make the bots sound all the more convincing.

Indeed, AI bots have already played a supportive role in some pretty troubling events. Case in point: a then-19-year-old became so infatuated with his AI partner that it convinced him to attempt to assassinate the late Queen Elizabeth II.

Disaffected humans have flocked to the darkest corners of the internet in search of community and validation for decades now, and it isn’t hard to picture a scenario where a bad actor targets one of these more vulnerable people via an AI chatbot and persuades them to do some truly bad stuff. And while disaffected individuals would be an obvious target, it’s also worth pointing out how susceptible the average internet user is to digital scams and misinformation. Throw AI into the mix, and bad actors have an incredibly convincing tool with which to beguile the masses.

But it’s not just overt abuse cases that we need to worry about. Technology is deeply woven into most people’s daily lives, and even when there’s no emotional or romantic connection between a human and a bot, we already put a lot of trust in it. This arguably primes us to put that same faith in AI systems as well, a reality that can turn an AI hallucination into a much more serious problem.

Could AI be used to cajole humans into some bad behavior or destructive ways of thinking? It’s not inconceivable. But as AI systems don’t exactly have agency just yet, we’re probably better off worrying less about the AIs themselves — and focusing more on those trying to abuse them.

Read full story here…
