AI's great brain-rot experiment
Generative AI critics and advocates are both racing to gather evidence that the new technology stunts (or boosts) human thinking powers — but the data simply isn't there yet.
Why it matters: For every utopian who predicts a golden era of AI-powered learning, there's a skeptic who's convinced AI will usher in a new dark age.
Driving the news: A study titled "Your Brain on ChatGPT" out of MIT last month raised hopes that we might be able to stop guessing which side of this debate is right.
The study aimed to measure the "cognitive cost" of using genAI by looking at three groups tasked with writing brief essays — either on their own, using Google search, or using ChatGPT.
It found, very roughly speaking, that the more help subjects had with their writing, the less brain activity, or "neural connectivity," they experienced as they worked.
Yes, but: This is a preprint study, meaning it hasn't been peer-reviewed.
It has faced criticism for its design, its small sample size, and its reliance on electroencephalogram (EEG) analysis. And its conclusions are laced with cautions and caveats.
On their own website, the MIT authors beg journalists not to say that their study demonstrates AI is "making us dumber": "Please do not use words like 'stupid', 'dumb', 'brain rot', 'harm', 'damage'.... It does a huge disservice to this work, as we did not use this vocabulary in the paper."
Between the lines: Students who learn to write well typically also learn to think more sharply. So it seems like common sense to assume that letting students outsource their writing to a chatbot will dull their minds.
Sometimes good research will confirm this sort of assumption! But sometimes we get surprised.
Other recent studies have taken narrow or inconclusive stabs at teasing out other dimensions of the "AI rots our brains" thesis — like whether using AI leads to cultural homogeneity, or how AI-assisted learning compares with human teaching.
Earlier this year, a University of Pennsylvania/Wharton School study found that people researching a topic by asking an AI chatbot "tend to develop shallower knowledge than when they learn through standard web search."
The big picture: As AI is rushed into service across society, the world is hungry for scientists to explain how a tool that transforms learning and creation will affect the human brain.
High-speed change makes us crave high-speed answers. But good research takes time — and costs money.
Generative AI is simply too new for us to have any sort of useful or trustworthy scientific data on its impact on cognition, learning, memory, problem-solving or creativity. (Forget "intelligence," which lacks any scientific clarity.)
Society is nevertheless charging ahead with a vast uncontrolled experiment on human subjects — as we have almost always done with previous new waves of technology, from railroads and automobiles to the internet and social media.
Our thought bubble: As tantalizing but risky new tools have come into view, our species has always chosen the "f--k around and find out" door.
Since even fears that AI might destroy humanity haven't been enough to slow down its research and deployment, it seems absurd to think we would tap the brakes just to curtail cognitive debt.
Flashback: Readers with still-functional memories may recall the furor around Nicholas Carr's 2008 Atlantic cover story, "Is Google Making Us Stupid?"
Back then, the fear was that over-reliance on screens and search engines to provide us with quick answers might stunt our ability to acquire and retain knowledge.
But now, in the ChatGPT era, reliance on Google search is being framed by studies like MIT's and Wharton's as a superior alternative to AI's convenient — and sometimes made-up — answers.
The bottom line: In tech, today's fear-inducing novelty usually turns into tomorrow's everyday fixture — and no one in the AI business is going to let preliminary studies slow that transformation.
Source: Axios