Discussion about this post

Douglas Webster:

And here is more on this issue from Axios which has really been all over the background of AI development and its dangers:

Your fake friends are getting a lot smarter ... and realer, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.

Why it matters: If you think those make-believe people on Facebook, Instagram and X — the bots — seem real and worrisome now, just wait.

Soon, thanks to AI, those fake friends will analyze your feeds, emotions, and habits so they can interact with the same savvy as the realest of people.

The next generation of bots will build psychological profiles on you — and potentially billions of others — and like, comment and interact just like real people.

This'll demand even more vigilance in determining what — and who — is real in the digital world.

A taste of the future: Brett Goldstein and Brett Benson — professors at Vanderbilt University who specialize in national and international security — show in vivid detail, in a recent New York Times op-ed, the looming danger of the increasingly savvy fake world.

They dug through piles of documents uncovered by Vanderbilt's Institute of National Security, exposing how a Chinese company — GoLaxy — optimizes fake people to dupe and deceive.

"What sets GoLaxy apart," the professors write, "is its integration of generative A.I. with enormous troves of personal data. Its systems continually mine social media platforms to build dynamic psychological profiles. Its content is customized to a person's values, beliefs, emotional tendencies and vulnerabilities."

They add that according to the documents, AI personas "can then engage users in what appears to be a conversation — content that feels authentic, adapts in real-time and avoids detection. The result is a highly efficient propaganda engine that's designed to be nearly indistinguishable from legitimate online interaction, delivered instantaneously at a scale never before achieved."

🔎 Between the lines: This makes Russia's bot farms look like the horse and buggy of online manipulation. We're talking real-time adaptations to match your moods, or desires, or beliefs — the very things that make most of us easy prey.

The threat of smarter, more realistic fake friends transcends malicious actors trying to warp your sense of politics — or reality. It hits your most personal inner thoughts and struggles.

State of play: AI is getting better, faster at mimicking human nuance, empathy and connection.

Some states, including Utah and Illinois, are racing to limit AI therapy. But most aren't. So our fake friends are about to become a lot more plentiful.

