Elon Musk May Kill Us Even If Donald Trump Doesn't
Is social media the disease for which there is no cure?
If I were king, and my word were law, I would shut down the riot machine formerly known as Twitter. In the allotted 280 characters, you can say only so much that is useful and instructive, but it’s more than you need to say something ugly, cruel and destructive. That’s why X has proved to be the perfect tool of demagogues like Donald Trump, who used the platform to urge his 87 million-odd followers to swarm the Capitol on January 6. And yes, I know that Barack Obama and Taylor Swift have more followers than Trump and use X to counsel peace, truth and good vibes. But that doesn’t cancel out X’s threat to democracy–especially as long as it remains in the grip of the increasingly crackpot Elon Musk, who has twice as many followers as Trump.
Of course, if I were king, America wouldn’t be a democracy. But it is, and we regard both the rule of law and free speech as sacred. So X isn’t going anywhere, and neither is Facebook, YouTube or TikTok. Neither, of course, is artificial intelligence, which may supercharge these platforms in ways we can barely imagine. How, then, are we to keep them from engulfing our democracy?
Is that a hyperbolic question? In his extraordinary 2021 book, The Constitution of Knowledge: A Defense of Truth, Jonathan Rauch, a scholar at Brookings, writes that modern societies have developed an implicit “epistemic” compact–an agreement about how we determine truth–that rests on a broad public acceptance of science and reason, and a respect and forbearance towards institutions charged with advancing knowledge. Today, Rauch writes, those institutions have given way to digital “platforms” that traffic in “information” rather than knowledge and disseminate that information not according to its accuracy but its popularity. And what is popular is sensation, shock, outrage. The old elite consensus has given way to an algorithm. Donald Trump, an entrepreneur of outrage, capitalized on the new technology to lead what Rauch calls “an epistemic secession.”
Oddly, Rauch foresees the arrival of “Internet 3.0,” in which the big companies accept that content regulation is in their interest and erect suitable “guardrails.” In conversation with me, Rauch said that social media companies now recognize that their algorithms are “toxic,” and spoke hopefully of alternative models like Mastodon, which eschews algorithms and allows users to curate their own feeds. It’s an appealing thought. However, Mastodon’s current user base is no more than 2.3 million; X has between 368 and 550 million active monthly users.
Jonathan Haidt, an NYU scholar, shares Rauch’s analysis but not his sanguine temperament. In an Atlantic essay, “Why the Past 10 Years of American Life Have Been Uniquely Stupid,” and in a follow-up piece, Haidt argued that the Age of Gutenberg–of books and the deep understanding that comes with them–ended somewhere around 2014 with the rise of “Share,” “Like” and “Retweet” buttons that opened the way for trolls, hucksters and Trumpists. The new age of “hyper-virality,” he writes, has given us both January 6 and cancel culture–ugly polarization in both directions. On the subject of stupidification, we should add the fact that high school students now get virtually their entire stock of knowledge about the world from digital platforms.
Haidt proposed several reforms, including modifying Facebook’s “Share” function and requiring “user verification” to get rid of trolls. But he doesn’t really believe in his own medicine. When he and I spoke, Haidt said that the era of “shared understanding” is over–forever. When I asked if he could envision changes that would help protect democracy, Haidt quoted Goldfinger: “Do you expect me to talk?” “No, Mr. Bond, I expect you to die!”
Do we have to choose between blithe optimism and The End of Life as We Know It? Why can’t we regulate social media so as to preserve its benefits, like allowing teenagers to share hilariously dumb videos, while mitigating its worst aspects? What would that look like? Antitrust law, such as the current investigation of Apple, which focuses on anti-competitive practices, doesn’t get at the real problem. Social media is a public health hazard–the cognitive equivalent of tobacco and sugary drinks. Adopting a public health model, we could, for example, ban the use of algorithms to reduce virality, or even require social media platforms to adopt a subscription rather than advertising revenue model and thus remove their incentive to amass ever more eyeballs.
We could, but we won’t, because unlike other public health hazards, digital platforms are forms of speech. Fox News is probably responsible for more polarization than all social media put together, but the federal government could not compel it–and all other media firms–to change its revenue model. Nor, I assume, could the state force Meta to drop or even modify its “Share” function, or X its “Retweet.” If Mark Zuckerberg or Elon Musk won’t do so out of concern for the public good–a pretty safe bet–they could be compelled to do so only by public or competitive pressure.
Can they? There are any number of non-profit and public-spirited experiments in social networks, like Mastodon. (See here for a partial list.) Jonathan Zittrain, a Harvard law professor–for some reason almost everyone who thinks hard about this issue is named Jonathan–has suggested an alternative model of self-regulation that he calls “community governance.” Wikipedia, my own favorite digital platform, is a non-profit run by its own members. Zittrain expresses some hope that King Zuckerberg will voluntarily accede to, or at least experiment with, this model, which, again, is hard to imagine absent pressures that don’t currently exist.
Perhaps the answer lies elsewhere. According to a recent article in Foreign Affairs, the Taiwanese have staunchly resisted China’s persistent attempts to use disinformation to corrupt their electoral system. The Taiwanese have done all the right things: local groups have developed innovative fact-checking technology, and the government has exposed local proxies. At bottom, however, Taiwan has proved resilient because its society is resilient; people reject China’s lies. We, here, don’t lack for fact-checkers, but rather for people willing to believe them. The problem is not the technology, but ourselves.
In polls, almost two-thirds of Americans say that social media is having negative effects on society. But you have to wonder if people really are repelled by our poisonous discourse, or by the hailstorm of disinformation, or if they just want to live comfortably inside their own bubble, and not somebody else’s. If Jonathan Haidt is right, it’s not because we’ve created a self-replicating machine that is destined to annihilate reason; it’s because we are the self-replicating machine.