Yet ANOTHER AI Prediction Comes True
We're Racing Forward With Blinders On
I have written several recent posts about the risks and damage, both present and future, associated with the headlong rush to promote and develop Artificial Intelligence and Bitcoin. Our most recent post on this topic reported that one of our original predictions was coming true: a multi-billion-dollar search for unique talent in AI development. Mark Zuckerberg and Meta reportedly offered a 24-year-old a quarter-billion dollars over four years to help the firm create an AI system smarter than the human brain. (And boy howdy does THAT raise lots of questions that the backers of AI are not about to talk about, or even seek answers to.)
AI is already creating new class and economic divisions worldwide. Only those who have already accrued vast wealth have the power to drive AI forward. We will see an intensifying global bidding war for the talents of those uniquely intelligent individuals who will shape the future of AI technology for good or ill.
We also predicted this:
Dictatorships, much as they search out and idolize star athletes to represent them in global sports events, will do the same in identifying and employing the talents of their smartest citizens to achieve their own ends. Poor nations without the resources to compete will find themselves further marginalized.
And lo and behold, here’s the latest from the New York Times:
The Chinese government is using companies with expertise in artificial intelligence to monitor and manipulate public opinion, giving it a new weapon in information warfare, according to current and former U.S. officials and documents unearthed by researchers.
One company’s internal documents show how it has undertaken influence campaigns in Hong Kong and Taiwan, and collected data on members of Congress and other influential Americans.
While the firm has not mounted a campaign in the United States, American spy agencies have monitored its activity for signs that it might try to influence American elections or political debates, former U.S. officials said.
Artificial intelligence is increasingly the new frontier of espionage and malign influence operations, allowing intelligence services to conduct campaigns far faster, more efficiently and on a larger scale than ever before.
I am not vying for the Nostradamus Nobel prize for future forecasting. This development was entirely predictable based on past performance of national security and intelligence agencies worldwide. In fact, one of the prime arguments of AI zealots, when challenged about the risks of the technology, is “What…you want to cede leadership in this field to the Chinese?”
It sounds compelling, but it is really a diversionary cover-up. We know governments worldwide are racing to figure out how to use AI to gain political, military and geopolitical superiority. They would be crazy not to.
Alarmingly, for most of the world it will be crazy, dangerous and frankly disastrous if and when they do. A government armed with a consolidated database of personal information about its population gains truly frightening potential for massive control, manipulation, blackmail and crackdowns.
And as we also reported, this administration appears focused on doing just that:
The Trump administration CLEARLY sees the value of a central database, awarding a firm called Palantir massive contracts to create new government databases. According to This Week in Tech:
Founded by Peter Thiel and Alex Karp, Palantir has quietly become one of the most powerful and controversial data analytics companies in the world. Since Trump took office, the company has received over $113 million in new federal contracts, plus a massive $800 million Pentagon deal. But it's not just the money that's raising eyebrows; it's what Palantir plans to do with it.
The company is now in discussions with multiple government agencies, including the Social Security Administration and the Internal Revenue Service, about creating a centralized database that would combine information from traditionally separate government silos. As Leo Laporte explained on the show, this isn't just about efficiency but unprecedented surveillance capabilities.
Palantir's ability to cross-reference vast amounts of data makes it particularly powerful and potentially dangerous. The company made its reputation in Afghanistan and the Middle East by analyzing multiple data streams to predict IED locations and save lives. That same technology could now be turned on American citizens.
"The whole secret sauce of Palantir is kind of cross-referencing," Laporte noted during the discussion. "It's one thing to have the IRS have a database, Social Security Administration have a database. As long as they're not cross-referenced, it's much less dangerous. But as soon as you cross-reference everything, the government knows about you."
The Trump administration has reportedly sought access to hundreds of data points on citizens, including bank account numbers, student debt amounts, medical claims, and disability status. While officials frame this as improving government efficiency and eliminating information silos, the implications go far beyond streamlining bureaucracy.
Furthermore, let us not forget DOGE and Elon Musk. Musk and his team got access to a giant mass of government data by simply taking over control of databases within agencies. Given Musk’s clear propensity for breaking things and asking for permission later, AND his heavy investments in AI, it is virtually impossible to believe that he didn’t have his black hat hacker minions hoover up every last piece of data they could find and store it on Musk’s own servers. Assuming that is the case, how likely is it, do you think, that he has not moved to create his own consolidated database? It is just too tempting.
There are further developments on the AI front, including mounting concerns over the massive energy demands these technologies are already generating, and how much of what is happening in the AI world is taking place out of sight and control of the public and potential regulators. I will speak to those concerns in future posts.
And here is more on this issue from Axios, which has been all over the background of AI development and its dangers:
Your fake friends are getting a lot smarter ... and realer, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.
Why it matters: If you think those make-believe people on Facebook, Instagram and X — the bots — seem real and worrisome now, just wait.
Soon, thanks to AI, those fake friends will analyze your feeds, emotions, and habits so they can interact with the same savvy as the realest of people.
The next generation of bots will build psychological profiles on you — and potentially billions of others — and like, comment and interact the same as normal people.
This'll demand even more vigilance in determining what — and who — is real in the digital world.
A taste of the future: Brett Goldstein and Brett Benson — professors at Vanderbilt University who specialize in national and international security — show in vivid detail, in a recent New York Times op-ed, the looming danger of the increasingly savvy fake world.
They dug through piles of documents uncovered by Vanderbilt's Institute of National Security, exposing how a Chinese company — GoLaxy — optimizes fake people to dupe and deceive.
"What sets GoLaxy apart," the professors write, "is its integration of generative A.I. with enormous troves of personal data. Its systems continually mine social media platforms to build dynamic psychological profiles. Its content is customized to a person's values, beliefs, emotional tendencies and vulnerabilities."
They add that according to the documents, AI personas "can then engage users in what appears to be a conversation — content that feels authentic, adapts in real-time and avoids detection. The result is a highly efficient propaganda engine that's designed to be nearly indistinguishable from legitimate online interaction, delivered instantaneously at a scale never before achieved."
🔎 Between the lines: This makes Russia's bot farms look like the horse and buggy of online manipulation. We're talking real-time adaptations to match your moods, or desires, or beliefs — the very things that make most of us easy prey.
The threat of smarter, more realistic fake friends transcends malicious actors trying to warp your sense of politics — or reality. It hits your most personal inner thoughts and struggles.
State of play: AI is getting better, faster at mimicking human nuance, empathy and connection.
Some states, including Utah and Illinois, are racing to limit AI therapy. But most aren't. So all of our fake friends are about to grow lots more plentiful.