The Singularity Prophets

The Singularity’s not coming to save us, but that doesn’t stop the world’s worst people from trying to bring it about.

Once the dust settles and we look back on the decade that has just ended, my guess is that we will come to see the countless post-Y2K promises of artificial intelligence and machine learning for what they really were: immensely successful if you go by Bloomberg tickers and ad engagement, but empty in delivering on their utopian promises. Google promised to democratize information access. Instead, its algorithms reinforced the marginalization of women and people of color, and it mostly functioned to upsell rather than enlighten. Engineers at YouTube built a personalized recommendation algorithm that has dramatically increased watch times, but has also radicalized people with fringe content and traumatized children with creepy, violent, and scatological videos. Workers on Amazon’s Mechanical Turk platform became the unseen labor force of the machine learning revolution by annotating training data in exchange for wages no one could possibly hope to live on. (A median wage of $2.75 an hour, with only 4% of workers earning over $7: rates that would be unlawful were it not for the magical legal fiction of “independent contracting.”)

Our internet footprints have also been harvested many times over in the form of private, proprietary datasets and public datasets—a hundred million Flickr images here or a few billion Instagram posts there. Our data is indiscriminately “taken from the ‘wild,’ essentially as-is, with minimal effort to sanitize,” Facebook researchers tell us. It will be used to keep you more hooked and sell you more things. You may have heard that the ability of computers to classify dogs and cats is on the rise, but so too is the capacity for human classification: facial recognition software that can be used by states and companies to “identify” gay men and ethnic Uyghurs and criminals and bad workers and ugly ones, too. Oh, and if all that wasn’t bad enough, these machine learning models can have industrial-scale carbon footprints.

There are plenty of technocrats and self-identifying “rationalists” who are worried about the consequences of machine learning. Oxford philosophy professor Nick Bostrom says that we are facing “quite possibly the most important and most daunting challenge humanity has ever faced. And—whether we succeed or fail—it is probably the last challenge we will ever face.” Elon Musk calls the challenge “the single biggest existential crisis that we face and the most pressing one.” MIT physicist Max Tegmark considers this the “most important conversation of our time.”

But what is the machine learning crisis they’re talking about? It is not that generations of black women have been told by Google search results that they are “angry” and objects of sexual gratification. It is not that our internet activity is being sold wholesale to companies as training data for ad personalization. In fact, it has nothing to do with this trend of mass marginalization and exploitation at the hands of a predatory and terrifying “surveillance capitalism.” Instead, they’re concerned about the prospect that machine learning will suddenly “explode” in growth and precipitate an apocalyptic event, better known as the singularity. When Peter Thiel says “people are spending way too much time thinking about climate change, way too little thinking about AI,” he isn’t referring to the way AI is used to spy on, manipulate, gaslight, and steal from people. He’s talking about the moment when artificially intelligent machines become more intelligent than humans, which he says “will be as momentous an event as extraterrestrials landing on this planet.”


The fantasy of creating superintelligent computers has tickled technologists since the days of ENIAC. Alan Turing, the father of computer science, ended a 1951 lecture entitled “Intelligent Machinery, a Heretical Theory” by stating:

“It seems probable that once [a] machine thinking method [starts], it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control.”

Fourteen years later, British mathematician I.J. Good coined the phrase “intelligence explosion” and penned his famous thought experiment in a paper entitled “Speculations Concerning the First Ultraintelligent Machine”:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Beginning in the 1990s, Ray Kurzweil, a notable futurist and now a Director of Engineering at Google, began popularizing his speculations about an impending utopian future of superintelligent machines in his bestsellers The Age of Spiritual Machines: When Computers Exceed Human Intelligence and The Singularity Is Near: When Humans Transcend Biology. However, it is Oxford’s Bostrom who succeeded in legitimizing the idea of an intelligence explosion as an actually-existing threat to humanity. A scenario previously confined to science fiction and occasional “what if” speculation was placed into the mainstream of both academic and popular discourse. It wasn’t just an idea. It was coming for us now. Bostrom warned in 2014 that we need to wake up and take action:

“Picture a school bus accelerating down a mountain road, full of quibbling and carousing kids. That is humanity. But if we look towards the front, we see that the driver’s seat is empty.” 

Armed with charts and foreboding parables, Bostrom and his fellow futurist Max Tegmark have been remarkably successful at seducing audiences with the threat of a cataclysmic intelligence explosion. Between the two of them, they have spoken at the United Nations Headquarters, written bestsellers that made it onto Barack Obama’s “Best of 2018” reading list, been the subject of a 12,000-word New Yorker piece, and made appearances on the podcasts of Intellectual Dark Web “renegades” Joe Rogan and Sam Harris. Research institutes dedicated to studying this existential risk have cropped up over the years, including the Future of Humanity Institute (FHI), which Bostrom founded at Oxford in 2005. Musk and Sam Altman founded OpenAI to ensure that artificial general intelligence “benefits all of humanity”; it has since received a billion-dollar investment from Microsoft. A host of scientific and technological luminaries have encouraged the study of an intelligence explosion as an existential threat, including Bill Gates, Stephen Hawking, Yuval Noah Harari, Jaan Tallinn (co-founder of Skype), Tim Berners-Lee (inventor of the World Wide Web), Martin Rees (Astronomer Royal), and 2020 Democratic presidential candidate Andrew Yang. “Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this,” Hillary Clinton wrote in her book What Happened. “My staff lived in fear that I’d start talking about ‘the rise of the robots’ in some Iowa town hall. Maybe I should have.”


The underlying argument is simple: Over the past century, machines have become increasingly good at accomplishing certain tasks that require “narrow” forms of intelligence: computers are superhuman at chess and Go but are nowhere near proficient at writing Current Affairs articles. And yet, given the observed exponential growth of computing power predicted by “Moore’s Law,” as well as the continual developments in AI, we might conjecture that machines will improve at increasingly generalized tasks over time, leading first to artificial general intelligence (when machines can do anything we can do intellectually) and eventually perhaps to superintelligence (when machines outperform us across the board) via self-improvement. 
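To see what that extrapolation trades on, it helps to write out the compounding (using the popular gloss of Moore’s Law, a doubling of computing power roughly every two years, rather than Gordon Moore’s original observation about transistor counts):

```latex
% Compounded doubling: after t years, capacity grows by a factor of 2^{t/2}.
\[
  2^{10/2} = 32\times \text{ per decade}, \qquad
  2^{20/2} \approx 1000\times \text{ over two decades.}
\]
```

It is this compounding, rather than any particular research breakthrough, that does the rhetorical work in the argument.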

It is the prospect of universally smarter machines that is an immense logical jump, and it is highly contentious within the AI community. In response to the incessant questions about machine superintelligence in the press (e.g., “Will Machines Eliminate Us?” in the MIT Technology Review), leading researchers have often dismissed the idea. Yann LeCun and Yoshua Bengio—two of the pioneers of deep learning, the subfield that has powered most of the recent advances in machine learning—say the issue is of minimal concern. The idea nonetheless easily stokes the imagination. “This is a genie that once it’s out of the bottle, you’re never getting it back in,” Joe Rogan said on The Joe Rogan Experience.

There are disagreements within the singularity discourse about when superintelligence will occur and how it will happen. But skeptics are offered a 21st-century revival of Pascal’s wager: even if there is only an infinitesimal chance of superintelligence occurring in the foreseeable future and eliminating humanity, it should still be the most important issue of our time, because if it did happen, it would be so much worse than anything else that could happen. It’s easy to see how this kind of utilitarian provocation could lead to an obsessive fear of something for which there is very little evidence. Are there good evidence-based reasons to think we are on the brink of this, and that this exponential growth will happen? Well, it doesn’t matter very much, because what if it did? And so what begins in rationalism ends in faith. Renowned atheist Sam Harris says, “We have to admit that we are in the process of building some sort of god.” We have to.
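The structure of the wager is worth writing out, because the arithmetic does all the work. In a sketch of the reasoning (the symbols here are illustrative, not drawn from any of these authors), assign the superintelligence catastrophe a tiny probability p and a loss L_sing, and compare it against a well-evidenced harm with probability q and finite loss L_climate:

```latex
% For ANY p > 0, the left side wins once L_sing is stipulated to be large enough:
\[
  p \cdot L_{\text{sing}} \;>\; q \cdot L_{\text{climate}}
  \quad\text{whenever}\quad
  L_{\text{sing}} > \frac{q}{p}\, L_{\text{climate}}.
\]
```

Because L_sing is stipulated (the end of humanity) rather than measured, the inequality can always be made to hold, no matter how small p is; this is how the utilitarian provocation swallows every competing concern.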

“Many of the points made in this book are probably wrong,” Bostrom tells us in the preface of his 2014 bestseller Superintelligence: Paths, Dangers, Strategies, perhaps the single most influential work in the singularity discourse. The book is full of counterfactuals and conditional forecasts, and Bostrom says that he has “gone to some length to indicate nuances and degrees of uncertainty throughout the text—encumbering it with an unsightly smudge of ‘possibly,’ ‘might,’ ‘may,’ ‘could well,’ ‘it seems,’ ‘probably,’ ‘very likely,’ ‘almost certainly,’” with “each qualifier … placed where it is carefully and deliberately.” But these smudges can serve to insulate him from criticism: once the reader is gripped by the fear his nightmare scenario is designed to elicit, the qualifying “smudges” fade from view.

Bostrom’s own background is in the intellectual movement of transhumanism, which he defines as “an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology.” The movement has long been popular among the techno-libertarians of Silicon Valley, and many transhumanists—including Bostrom, Ray Kurzweil, and Peter Thiel—openly hope for radical life extension via technological means such as artificial intelligence, biological enhancement, and brain-computer interfaces. In 1998, Bostrom co-founded the World Transhumanist Association, now known as Humanity+ (which happens to have received multiple donations from eugenicist and pedophile Jeffrey Epstein). Though Bostrom no longer describes himself as a transhumanist, he has championed many ideas central to the movement in his research. Papers with a transhumanist flavor that have been authored or co-authored by Bostrom include “Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?”, “Why I Want to be a Posthuman When I Grow Up,” “Whole Brain Emulation: A Technical Roadmap,” and “Are We Living in a Computer Simulation?”

The line between transhumanism and eugenics is not always completely clear; Bostrom’s conception of superintelligence includes biological methods of enhancement in addition to machine intelligence. In the second chapter of Superintelligence, Bostrom asks us to suspend disbelief and entertain paths to superintelligence by way of whole brain emulations and iterated embryo selections in pursuit of “…if one wished to speak provocatively… less distorted expressions of human form,” aligning with his belief in “human enhancement,” the transhumanist euphemism for non-coercive eugenics. “The eugenics movement as a whole, in all its forms, became discredited because of the terrible crimes that had been committed in its name,” Bostrom rued in a 2005 paper on transhumanist thought. Though often overlooked by the press when discussing his work, Bostrom has published extensively on human enhancement, drawing conclusions such as “there are no compelling reasons to resist the use of genetic intervention to select the best children.” 

Indeed, in Superintelligence, Bostrom describes a viable path to superintelligence in which “social elites” gain first access to biological enhancement mechanisms and inspire a “culture shift” among others. “Many of the initially reluctant might join the bandwagon in order to have a child that is not at a disadvantage relative to the enhanced children of their friends and colleagues,” says Bostrom. It is not always obvious whether we are hearing of a dystopia to be avoided at all costs or something considered logical and not to be fought against. Bostrom has written:

“One can speculate that some technologies may cause social inequalities to widen…. Trying to ban technological innovations on these grounds would be misguided. If a society judges these inequalities to be unacceptable, it would be wiser for that society to increase wealth redistribution, for example by means of taxation and the provision of free services (education vouchers, IT access in public libraries, genetic enhancements covered by social security etc.).” 

Central to the singularity discourse is a persistent trope about technology: that machines will either liberate us or eviscerate us, bifurcating our collective future into rapture or ruin. And so the singularity technocrats speak in dichotomies: Max Tegmark and Elon Musk tell us that AI development may be “either the best or the worst thing ever to happen to humanity.” (It’s not possible it could be neither?) Stuart Russell posits in his 2019 book Human Compatible: Artificial Intelligence and the Problem of Control:

“If we succeed in creating provably beneficial AI systems, we would eliminate the risk that we might lose control over superintelligent machines…. Humanity could proceed with their development and reap the almost unimaginable benefits…. We would be released from millennia of servitude…. Or perhaps not. Bondian villains may circumvent our safeguards and unleash uncontrollable superintelligences against which humanity has no defense. And if we survive that, we may find ourselves gradually enfeebled as we entrust more and more of our knowledge and skills to machines.”

It is not exactly nuanced. And it makes it difficult to have serious and careful conversations. Discussion of technology’s relationship to the social and economic structure is also conspicuously absent. Russell, one of the more measured singularity technocrats (who believes that whole brain emulations are “obviously a bad idea”), devotes only a handful of pages in his 300-page Human Compatible to the topic of algorithmic bias, the encoding of human bias such as racism into machine learning systems via training data. But algorithmic bias is extremely important: if existing inequities are incorporated and amplified by algorithms, injustice will become that much more entrenched and hard to eradicate.

Bostrom and Tegmark refuse to engage at all with the long history of technology as a marginalizing force from the perspective of race or gender. Tegmark asks in his 2017 book Life 3.0: Being Human in the Age of Artificial Intelligence:

“…if AI-assisted brain scanning technology became commonplace in courtrooms, the currently tedious process of establishing the facts of a case could be dramatically simplified and expedited, enabling faster trials and fairer judgments. But privacy advocates might worry…. Where would you draw the line?”

The question is not just one of privacy, though. It is also about whose brains will be the ones being scanned, and how these supposedly “fairer” judgments are made. The introduction of aggressively invasive technology in an unequal world threatens some classes of people more than others. 

Likewise, Yuval Noah Harari goads us with a similar question in Homo Deus: A Brief History of Tomorrow: “What will be the fate of all these lawyers once sophisticated search algorithms can locate more precedents in a day than a human can in a lifetime, and once brain scans can reveal lies and deceptions at the press of a button?” To Tegmark and Harari, the main ethical question raised by machine jurism is the projected massive job loss at the hands of automation. The fate of the lawyers, but not the fate of the underclass who would be processed and judged by this robo-court system.

Indeed, some of these prophets almost seem to be fantasizing about life in an Orwellian future. Harari dreams up a Google that:

“watches every breath you take, every move you make, and every bond you break; a system that monitors your bank account and your heartbeat, your sugar levels and your sexual escapades. It will definitely know you much better than you know yourself…. Many of us would be happy to transfer much of our decision-making processes into the hands of such a system.”

Many of us, yes, but others of us will have our decision-making process thrust into the hands of such a system against our will—employers will demand it as a condition of employment. There is, in these predictions, a wholehearted whitewashing of the exploitative monetization of our personal lives at the hands of surveillance capitalism. Already, our biometrics are monetized by insurance companies and our sexual activity is pawned off to menstrual tracking apps. The crucial question is not just how much the machines will know, but whose interests they will serve.

Art by Christopher Matthews

It’s unsurprising that technocratic capitalism is left untouched within the singularity discourse, given that the discourse is dominated by techno-libertarians—mostly white, mostly male. In fact, one of the greatest ironies of the singularity discourse is that many of its arguments read more effectively as critiques of capitalism itself. One of Bostrom’s most famous arguments is his “paperclip maximizer” AI, which is given the sole mandate of producing as many paperclips as possible. But how do we specify its utility function to prevent it from depleting our energy sources in order to continue making paperclips, or destroying the humans who try to prevent it from continuing to make paperclips? It is a titillating thought experiment, though it amounts to little more than a recycling of King Midas’s classic wish-gone-awry, where the desire to turn everything to gold upon touch presents unexpected challenges for alimentation and intimacy—a tale of human greed. Bostrom introduced the thought experiment to illustrate the dangers intrinsic to the fact that “artificial intellects need not have humanlike motives.” But the thought experiment rings truer as a critique of the profit maximization and corporate greed under capitalism, and their role in fueling climate change. If we create an institution whose sole mandate is to maximize profits, but maximizing profits means destroying the world, we clearly need to rewrite our program. Bostrom prods us to consider a paperclip AI harvesting metal from our blood in order to continue to pursue its deadly mandate, but the thought is nowhere near as fear-inspiring as the recent fires in Australia.
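The mechanics of the thought experiment fit in a few lines of code. The following toy sketch (entirely illustrative; no real AI system works this way) shows how an optimizer handed a single objective and no side constraints will happily consume everything within reach:

```python
# A toy caricature of the "paperclip maximizer": a greedy optimizer
# with one objective and no constraints. All names are illustrative.

world = {"iron": 100, "forests": 50, "humans": 20}  # resources, abstractly

def make_paperclips(resources: dict) -> int:
    """Maximize paperclip count. Nothing in the objective says
    which resources are off-limits, so none of them are."""
    clips = 0
    for resource, amount in resources.items():
        clips += amount          # convert everything convertible
        resources[resource] = 0  # the objective never says "stop"
    return clips

print(make_paperclips(world))  # 170
print(world)                   # {'iron': 0, 'forests': 0, 'humans': 0}
```

Swap “paperclips” for “quarterly profit” and the same dozen lines read as the critique of capitalism described above: the failure lives in the objective, not in the optimizer.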


“There are various principles and norms, which are currently deeply entrenched and often endorsed without qualification, that would need to be examined afresh in a context of radical AI,” wrote Bostrom in a 2018 whitepaper with his Future of Humanity Institute colleagues Allan Dafoe and Carrick Flynn. In the singularity discourse, democratic ideals fit the bill. Harari asserts in Homo Deus:

“Liberal habits such as democratic elections will become obsolete, because Google will be able to represent even my own political opinions better than I can.”

Bostrom and his co-authors concur:

“It is possible that the epistemic value of letting political decisions be influenced by many human opinions would be reduced or eliminated if superintelligent AI were sufficiently epistemically superior to humans.”

To Bostrom and Harari, the right machine is even a cure for an ignorant electorate. 

In fact, the most nightmarish increases in state power, aided by technology, might be justified in the name of stopping the wrong machine. In a 2018 paper, Bostrom presented his Vulnerable World Hypothesis, which calls for “greatly amplified capacities for preventive policing and global governance” in order to monitor AI research and stop nefarious actors from developing superintelligence. One proposed solution is Bostrom’s “high-tech panopticon”:

“Everybody is fitted with a “freedom tag”—a sequent to the more limited wearable surveillance devices familiar today… worn around the neck and bedecked with multidirectional cameras and microphones. Encrypted video and audio is continuously uploaded from the device to the cloud and machine-interpreted in real time. AI algorithms classify the activities of the wearer, his hand movements, nearby objects, and other situational cues. If suspicious activity is detected, the feed is relayed to one of several patriot monitoring stations. These are vast office complexes, staffed 24/7…. Citizens are not permitted to remove the freedom tag, except while they are in environments that have been outfitted with adequate external sensors.”

Bostrom takes his time enumerating what wonders might be accomplished by such a system: “many forms of crime could be nearly eliminated… It might also generate growth in many beneficial cultural practices that are currently inhibited by a lack of social trust.” He even calculates its price (less than 1% of the world’s GDP). He leaves the downsides to a footnote, saying “The Orwellian-sounding name [of the freedom tag] is of course intentional, to remind us of the full range of ways in which such a system could be applied.” In a Wired interview, Bostrom said that he was “not sure every reader got the sense of irony,” but he had little interest in enumerating potential abuses or thinking about how different populations interact with the criminal punishment system. 


The anthropomorphization of machines has long been ingrained in the vocabulary of machine learning, from “neural networks” to the very concept of a machine “learning.” But the singularity discourse doubles down; to join the debate is to enter a world in which machines are human, and we are machines. Stuart Russell describes our collective past as “agricultural, industrial, and clerical robots”; Tegmark entertains a world in which self-driving cars are not only liable for lawsuits but also “are allowed to hold insurance policies,” “own money and property,” “[make] money on the stock market,” and “buy online services.” “If you’re OK with granting machines the rights to own property, then how about granting them the right to vote?” Tegmark asks. The question of machine rights is thus not far behind.

Indeed, for Tegmark, certain utopian futures for humans are “biased against non-human intelligence: the robots that perform virtually all the work appear to be rather intelligent, but are treated as slaves, and people appear to take for granted that they have no consciousness and should have no rights.” To disabuse us of the notion that machines should not have rights, Tegmark devotes an entire section of Life 3.0 to explaining “How Slave Owners Justify Slavery,” to show that the things that will be said to justify denying machines rights were also once said of Blacks:

“Once upon a time, the white population in the American South ended up better off because the slaves did much of their work, but most people today view it as morally objectionable to call this progress.”

In a 2017 Future of Life Institute panel, Kurzweil, too, expressed worry over machine rights: “if we create [artificial general intelligence], everybody assumes that it’s just a machine, and therefore, it’s not conscious, but actually it is suffering.” Likewise, in the section “Voluntary Slavery, Casual Death” of Superintelligence, Bostrom ponders whether “working machine minds are owned as capital (slaves) or are hired as free wage laborers”; we must “consider the plight of the working-class machine.”

Indeed, the singularitarians frame the question of machine rights as one of immense philosophical and ethical consequence. But what about the rights of people under a “high-tech panopticon” or machine jurism? By spilling more ink expressing genuine, nuanced concern for the hypothetical rights of machines than for the rights of people, the singularitarians suffocate questions of marginalization. It is the culmination of the incessant anthropomorphization of machines within the singularity discourse.


We live in the age of artificial intelligence, we are told, which means that we live in an age of endless hagiographies of the “intelligent” machine. The keynotes and op-eds speak of smarter and smarter machines that will replace our doctors and compose us music, free us from toil and enrich every facet of our lives. And thus, we are constantly goaded with an implicit question: what happens if these machines keep getting smarter?

In the singularity discourse, the question is rendered explicit. And it is a question that continues to pick up steam, steadily accruing followers and attention. “Just had a call with Nick Bostrom who schooled me on AI issues of the future. We have a lot of work to do,” Andrew Yang tweeted last March. “It’s important that Neuralink solves this problem [of a brain-computer interface] sooner rather than later, because the point at which we have digital superintelligence, that’s when we pass the singularity and things become just very uncertain,” said Elon Musk in an interview last July about his company Neuralink—valued at half a billion dollars—that is designing implantable brain-computer interfaces. The language of superintelligence is here to stay.

So why does the discourse continue to spread? Certainly, the singularity indulges technologists’ fantasies and attracts publicity with alarmist headlines. But I think there is more to it. In her incisive 2018 book Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Umoja Noble dissects the rise of our reliance on machines for decision-making:

“I often challenge audiences who come to my talks to consider that at the very historical moment when structural barriers to employment were being addressed legislatively in the 1960s, the rise of our reliance on modern technologies emerged, positing that computers could make better decisions than humans. I do not think it a coincidence that when women and people of color are finally given the opportunity to participate in limited spheres of decision making in society, computers are simultaneously celebrated as a more optimal choice for making social decisions. The rise of big-data optimism is here, and if ever there were a time when politicians, industry leaders, and academics were enamored with artificial intelligence as a superior approach to sense-making, it is now.”

I would posit that an analogous phenomenon can be observed within the singularity discourse. It is no coincidence that as machine learning continues to seep further and further into the fabric of our lives, the rhetoric of superintelligence has caught on in full force among the technocratic elite. As we begin to hold tech executives, managers, researchers, and engineers accountable for an ever-increasing list of transgressions, they have been abstracting away responsibility by positioning the machines as our greatest enemy, rendered rhetorically just human enough to bear the blame. The legitimate concerns of marginalized and exploited groups surrounding machine learning are drowned out by the universal danger of a machine intelligence explosion, where the Google executive and the Mechanical Turk worker are equally at risk. When we push back against the rhetoric, we are patronized. “Only a few of us seem to be in a position to think this question through,” says Sam Harris. And yet, ironically, embedded within the singularity discourse is also an unrelenting faith in the machine—the persistent optimism of Silicon Valley that if we just solve the singular problem of aligning the machines with our own interests, we will be delivered into a utopian vision of the future. It is this recurring dichotomy of fear and deliverance that transmutes those at the heart of the machine learning technocracy from posturing opportunists into heroes. They are our singularity prophets, divining with machines.
