Trump, Anthropic, and What We Should Fear from AI

The recent rift between the Trump administration and the AI research firm Anthropic generated considerable commentary from the pundit class.

The New York Times published a conversation between one of its podcasters, Ezra Klein, and the “AI expert,” Dean Ball, about the dispute. Ball repeatedly emphasized that it would be really bad if the federal government used current AI capabilities for surveillance, and that Anthropic is purportedly interested in preventing this supposed move toward authoritarianism.

Strikingly absent from the conversation, and from most of the rest of the commentary on the Trump/Anthropic divide, is any defense of why it is a good thing that private companies are allowed to produce artificial intelligence with these capabilities in the first place.

Private companies, of course, are not states, so they do not have access to the avenues leading to totalitarianism that are open to states. But what makes us believe that private companies, on that basis, can be trusted not to use such technologies for malevolent purposes? And what exactly are the parameters of AI’s power here?

Here is a telling bit of the exchange between Klein and Ball:

Ball: You have to create a kind of soul that is virtuous, and that will reason about reality and its infinite permutations in ways that we will ultimately trust to come to the right conclusion.

My son was born a few months ago ——

Klein: Congratulations.

Ball: Thank you. It’s not that different, really. I’m trying to create a virtuous soul in my son, and Anthropic is trying to do the same with Claude. So are the other labs, too, though they realize this to varying degrees.

Let us pause right there for a second…

This AI expert, Ball, a leading light in the class of people we unwashed citizens (we “non-experts”) are told to trust on the matter, says that building an AI that will act morally is “not that different, really” from teaching a human child to be moral. To be clear: this is a human being saying this, a human who has a child, and so presumably a human who should know something substantial about humans and what differentiates us from machines. And yet, from his point of view, human children and machines are basically the same thing. He is expressing the utmost confidence that human beings know how to make souls … in machines.

Later in the interview, Ball speaks of his de facto religion (though he does not recognize it as one): free speech and libertarianism. It is a faith he shares with many in the AI community.

Ball thinks a commitment to the principles of basically unrestricted free speech will make it more likely that things go well as more and more powerful forms of AI arrive. We need only keep the government away from it.

Government, in this view, is the source of all evil, while private companies, apparently especially those like Anthropic that advertise their moral opposition to totalitarianism, can be expected to do the right thing. Or at least, he suggests, government’s hands-off approach will guarantee that companies remain ideologically diverse and therefore cancel one another out in an open market, or something like that.

But the merest glance at the global corporate tech marketplace shows the shallowness of this view. Whatever its superficial libertarian heterogeneity, that marketplace has colonized virtually every space in our culture with surveillance advertising, mercilessly targeted using your personal profile data. This is the result of unregulated freedom for that technology.

Here’s a telling bit of evidence concerning what we can expect, morally, from companies like Anthropic, the company Ball suggests is much more trustworthy than the federal government. To train its AI program, Claude, Anthropic illegally copied vast numbers of copyrighted published texts. It was sued and is presently making a $1.5 billion payout to its victims, including this writer. The victims will receive a payout under the court’s terms, but Claude will still profit from Anthropic’s theft of our intellectual work. Can one really believe this will be the last or the worst of Anthropic’s crimes on the road to getting us to “The Singularity”?

Ball admits the politics of an AI company are basically those of its researchers, but he seems wholly unaware of the often sinister, anti-human content instilled in some of these systems. Let’s just trust the Tech Bros. That seems to be the message. The Tech Bros can be relied on to move our civilization in the right direction.

Much of our elite culture has gone off the rails when it comes to AI. Vanishingly few of those who talk most about the topic in the public sphere—and least of all the Tech Bros and their public intellectual shills—seem to understand or be willing to face the huge potential existential danger of where we are going.

The existential question of a super-intelligent AI is such that the old binary of “State or Market,” which so preoccupies Klein and Ball, becomes irrelevant. Once an intelligent system this powerful exists (and Anthropic and others are avidly pursuing just this aim) and is widely available throughout society, “State or Market” will no longer matter. The power of that superintelligence will exert itself just as superior intelligences have always exerted themselves over inferior ones. How has that looked? Just check the record of how humans interact with, say, our primate relatives, who are under serious threat of having their habitats eliminated when they are not being displayed in zoos, and be very afraid.

We are not there yet, of course, and the Trump/Anthropic argument is about AI operating at a different level of competence. Nonetheless, the current capabilities of this technology are so invasive, both potentially and actually, and so corrosive of human privacy and agency, that they already raise the question of whether it is morally sound to produce and disseminate such technologies at all.

https://chroniclesmagazine.org/web/trump-anthropic-and-what-we-should-fear-from-ai