AI machines aren't 'hallucinating'. But their makers are

Inside the many debates swirling around the rapid rollout of so-called artificial intelligence, there is a relatively obscure skirmish focused on the choice of the word "hallucinate".
This is the term that architects and boosters of generative AI have settled on to characterize responses served up by chatbots that are wholly manufactured, or flat-out wrong. Like, for instance, when you ask a bot for a definition of something that doesn't exist and it, rather convincingly, gives you one, complete with made-up footnotes. "No one in the field has yet solved the hallucination problems," Sundar Pichai, the CEO of Google and Alphabet, told an interviewer recently.
That's true - but why call the errors "hallucinations" at all? Why not algorithmic junk? Or glitches? Well, hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI's boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector's most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species. How else could bots like Bing and Bard be tripping out there in the ether?
Warped hallucinations are indeed afoot in the world of AI - but it's not the bots that are having them; it's the tech CEOs who unleashed them, along with a phalanx of their fans, who are in the grips of wild hallucinations, both individually and collectively. Here I am defining hallucination not in the mystical or psychedelic sense - mind-altered states that can indeed assist in accessing profound, previously unperceived truths. No. These folks are just tripping: seeing, or at least claiming to see, evidence that is not there at all, even conjuring entire worlds that will put their products to use for our universal elevation and education.
Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations, and we have all been hearing them on a loop ever since ChatGPT launched at the end of last year.
There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.
And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit - from both humans and the natural world - a reality that has brought us to what we might think of as capitalism's techno-necro stage. In that reality of hyper-concentrated power and wealth, AI - far from living up to all those utopian hallucinations - is much more likely to become a fearsome tool of further dispossession and despoliation.
I'll dig into why that is so. But first, it's helpful to think about the purpose the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the very humans whose lifetime of labor trained the machines - without their permission or consent.
This should not be legal. In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal.
Why, for instance, should a for-profit company be permitted to feed the paintings, drawings and photographs of living artists into a program like Stable Diffusion or DALL-E 2 so it can then be used to generate doppelganger versions of those very artists' work, with the benefits flowing to everyone but the artists themselves?
The painter and illustrator Molly Crabapple is helping lead a movement of artists challenging this theft. "AI art generators are trained on enormous datasets, containing millions upon millions of copyrighted images, harvested without their creators' knowledge, let alone compensation or consent. This is effectively the greatest art heist in history. Perpetrated by respectable-seeming corporate entities backed by Silicon Valley venture capital. It's daylight robbery," a new open letter she co-drafted states.
The trick, of course, is that Silicon Valley routinely calls theft "disruption" - and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don't apply to your new tech; scream that regulation will only help China - all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands.
We saw it with Google's book and art scanning. With Musk's space colonization. With Uber's assault on the taxi industry. With Airbnb's attack on the rental market. With Facebook's promiscuity with our data. Don't ask for permission, the disruptors like to say, ask for forgiveness. (And lubricate the asks with generous campaign contributions.)
In The Age of Surveillance Capitalism, Shoshana Zuboff meticulously details how Google's Street View maps steamrolled over privacy norms by sending its camera-bedecked cars out to photograph our public roadways and the exteriors of our homes. By the time the lawsuits defending privacy rights rolled around, Street View was already so ubiquitous on our devices (and so cool, and so convenient …) that few courts outside Germany were willing to intervene.
Now the same thing that happened to the exterior of our homes is happening to our words, our images, our songs, our entire digital lives. All are currently being seized and used to train the machines to simulate thinking and creativity. These companies must know they are engaged in theft, or at least that a strong case can be made that they are. They are just hoping that the old playbook works one more time - that the scale of the heist is already so large and unfolding with such speed that courts and policymakers will once again throw up their hands in the face of the supposed inevitability of it all.
It's also why their hallucinations about all the wonderful things that AI will do for humanity are so important. Because those lofty claims disguise this mass theft as a gift - at the same time as they help rationalize AI's undeniable perils.
By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause "human extinction or similarly permanent and severe disempowerment of the human species". Chillingly, the median response was that there was a 10% chance.
How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides - except that these upsides are, for the most part, hallucinatory. Let's dig into a few of the wilder ones.
Hallucination #1: AI will solve the climate crisis
Almost invariably topping the lists of AI upsides is the claim that these systems will somehow solve the climate crisis. We have heard this from everyone from the World Economic Forum to the Council on Foreign Relations to Boston Consulting Group, which explains that AI "can be used to support all stakeholders in taking a more informed and data-driven approach to combating carbon emissions and building a greener society. It can also be employed to reweight global climate efforts toward the most at-risk regions." The former Google CEO Eric Schmidt summed up the case when he told the Atlantic that AI's risks were worth taking, because "If you think about the biggest problems in the world, they are all really hard - climate change, human organizations, and so forth. And so, I always want people to be smarter."