America, you have spoken loud and clear: You do not like AI.
A Pew Research Center survey published in September found that 50 percent of respondents were more concerned than excited about AI; just 10 percent felt the opposite. Most people, 57 percent, said the societal risks were high, while a mere 25 percent thought the benefits would be high. In another poll, only 2 percent — 2 percent! — of respondents said they fully trust AI’s capability to make fair and unbiased decisions, while 60 percent somewhat or fully distrusted it. Standing athwart the development of AI and yelling “Stop!” is quickly emerging as one of the most popular positions on both ends of the political spectrum.
Putting aside the fact that Americans sure are using AI all the time, these fears are understandable. We hear that AI is stealing our electricity, stealing our jobs, stealing our vibes, and if you believe the warnings of prominent doomers, potentially even stealing our future. We’re being inundated with AI slop — now with Disney characters! Even the most optimistic takes on AI — heralding a world of all play and no work — can feel so out-of-this-world utopian that they’re a little scary too.
Our contradictory feelings are captured in the chart of the year from the Dallas Fed, which forecasts how AI might affect the economy:
Red line: AI singularity and near-infinite money. Purple line: AI-driven total human extinction and, uh, zero money.
But I believe part of the reason we find AI so disquieting is that the disquieting uses — around work, education, relationships — are the ones that have gotten most of the attention, while pro-social uses of AI that could actually help address major problems tend to go under the radar. If I wanted to change people’s minds about AI, to show them the good this technology could bring, I would start with what it could do for the foundation of human prosperity: scientific research.
We really need better ideas
But before I get there, here’s the bad news: There’s growing evidence that humanity is generating fewer new ideas. In a widely cited paper with the extremely unsubtle title “Are Ideas Getting Harder to Find?” economist Nicholas Bloom and his colleagues looked across sectors from semiconductors to agriculture and found that we now need vastly more researchers and R&D spending just to keep productivity and growth on the same old trend line. We have to row harder just to stay in the same place.
Inside science, the pattern looks similar. A 2023 Nature paper analyzed 45 million papers and nearly 4 million patents and found that work is getting less “disruptive” over time — less likely to send a field off in a promising new direction. Then there’s the demographic crunch: New ideas come from people, so fewer people eventually means fewer ideas. With fertility in wealthy countries below replacement levels and global population likely to plateau and then shrink, you move toward an “empty planet” scenario where living standards stagnate because there simply aren’t enough brains to push the frontier. And if, as the Trump administration is doing, you cut off the pipeline of foreign scientific talent, you’re essentially taxing idea production twice.
One major problem here, ironically, is that scientists have to wade through too much science. They’re increasingly drowning in data and literature that they lack the time to parse, let alone use in actual scientific work. But those are exactly the bottlenecks AI is well-suited to attack, which is why researchers are coming around to the idea of “AI as a co-scientist.”
Professor AI, at your service
The clearest example out there is AlphaFold, the Google DeepMind system that predicts the 3D shape of proteins from their amino-acid sequences — a problem that used to take months or years of painstaking lab work per protein. Today, thanks to AlphaFold, biologists have high-quality predictions for essentially the entire protein universe sitting in a database, which makes it much easier to design the kind of new drugs, vaccines, and enzymes that help improve health and productivity. AlphaFold even earned the ultimate stamp of science approval when it won the 2024 Nobel Prize for chemistry. (Okay, technically, the prize went to AlphaFold creators Demis Hassabis and John Jumper of DeepMind, as well as the computational biologist David Baker, but it was AlphaFold that did much of the hard work.)
Or take materials science, i.e., the science of stuff. In 2023, DeepMind unveiled GNoME, a graph neural network trained on crystal data that proposed about 2.2 million new inorganic crystal structures and flagged roughly 380,000 as likely to be stable — compared to only about 48,000 stable inorganic crystals that humanity had previously confirmed, ever. That represented hundreds of years’ worth of discovery in one shot. AI has vastly widened the search for materials that could make cheaper batteries, more efficient solar cells, better chips, and stronger construction materials.
Or take something that affects everyone’s life, every day: weather forecasting. DeepMind’s GraphCast model learns directly from decades of weather data and can spit out a global 10-day forecast in under a minute, outperforming the gold-standard physics-based models on most measures. (If you’re noticing a theme, DeepMind has focused more on scientific applications than many of its rivals in AI.) That can eventually translate to better weather forecasts on your TV or phone.
In each of these examples, scientists can take a domain that is already data-rich and mathematically structured — proteins, crystals, the atmosphere — and let an AI model drink from a firehose of past data, learn the underlying patterns, and then search enormous spaces of “what if?” possibilities. If AI elsewhere in the economy seems mostly focused on replacing parts of human labor, the best AI in science allows researchers to do things that simply weren’t possible before. That’s addition, not replacement.
The next wave is even weirder: AI systems that can actually run experiments.
One example is Coscientist, a large language model-based “lab partner” built by researchers at Carnegie Mellon. In a 2023 Nature paper, they showed that Coscientist could read hardware documentation, plan multistep chemistry experiments, write control code, and operate real instruments in a fully automated lab. The system actually orchestrates the robots that mix chemicals and collect data. It’s still early and a long way from a “self-driving lab,” but it shows that with AI, you don’t have to be in the building to do serious wet-lab science anymore.
Then there’s FutureHouse, which isn’t, as I first thought, some kind of futuristic European EDM DJ, but a tiny Eric Schmidt-backed nonprofit that wants to build an “AI scientist” within a decade. Remember that problem about how there’s simply too much data and too many papers for any scientist to process? This year, FutureHouse launched a platform with four specialized agents designed to clear that bottleneck: Crow for general scientific Q&A, Falcon for deep literature reviews, Owl for “has anyone done X before?” cross-checking, and Phoenix for chemistry workflows like synthesis planning. In their own benchmarks and in early outside write-ups, these agents often beat both generic AI tools and human PhDs at finding relevant papers and synthesizing them with citations, performing the exhausting review work that frees human scientists to do, you know, science.
The showpiece is Robin, a multiagent “AI scientist” that strings those tools together into something close to an end-to-end scientific workflow. In one example, FutureHouse used Robin to tackle dry age-related macular degeneration, a leading cause of blindness. The system read the literature, proposed a mechanism for the condition that involved many long words I can’t begin to spell, identified the glaucoma drug ripasudil as a candidate for a repurposed treatment, and then designed and analyzed follow-up experiments that supported its hypothesis — all with humans executing the lab work and, especially, double-checking the outputs.
Put the pieces together and you can see a plausible near-future where human scientists focus more on choosing good questions and interpreting results, while an invisible layer of AI systems handles the grunt work of reading, planning, and number-crunching, like an army of unpaid grad students.
We should use AI for the things that actually matter
Even if the global population plateaus and the US keeps making it harder for scientists to immigrate, abundant AI-for-science effectively increases the number of “minds” working on hard problems. That’s exactly what we need to get economic growth going again: instead of just hiring more researchers (a harder and harder proposition), we make each existing researcher much more productive. That ideally translates into cheaper drug discovery and repurposing that can eventually bend the health care cost curve; new battery and solar materials that make clean energy genuinely cheap; better forecasts and climate models that reduce disaster losses and make it easier to build in more places without getting wiped out by extreme weather.
As always with AI, though, there are caveats. The same language models that can help interpret papers are also very good at confidently mangling them, and recent evaluations suggest they overgeneralize and misstate scientific findings a lot more than human readers would like. The same tools that can accelerate vaccine design can, in principle, accelerate research on pathogens and chemical weapons. If you wire AI into lab equipment without the right checks, you risk scaling up not only good experiments but also bad ones, faster than humans can audit them.
When I look back on the Dallas Fed’s now-internet-famous chart where the red line is “AI singularity: infinite money” and the purple line is “AI singularity: extinction,” I think the real missing line is the boring-but-transformative one in the middle: AI as the invisible infrastructure that helps scientists find good ideas faster, restart productivity growth, and quietly make key parts of life cheaper and better instead of weirder and scarier.
The public is right to be anxious about the ways AI can go wrong; yelling “stop” is a rational response when the choices seem to be slop now or singularity/extinction later. But if we’re serious about making life more affordable and abundant — if we’re serious about growth — the more interesting political project isn’t banning AI or worshipping it. Instead, it’s insisting that we point as much of this weird new capability as possible at the scientific work that actually moves the needle on health, energy, climate, and everything else we say we care about.
This series was supported by a grant from Arnold Ventures. Vox had full discretion over the content of this reporting.
A version of this story originally appeared in the Good News newsletter.