
Is AI progress slowing down?

Last month, tech outlet The Information reported that OpenAI and its competitors are switching strategies because the rate of improvement of their AI models has slowed dramatically. For a long time, you’ve been able to make AI systems dramatically better across a wide range of tasks just by making them bigger.

Why does this matter? All kinds of problems that were once believed to require elaborate custom solutions turned out to crumble in the face of greater scale. We have applications like OpenAI’s ChatGPT because of these scaling laws. If scaling no longer delivers those returns, then the future of AI development will look a lot different, and potentially a lot less optimistic, than the past.
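For the curious, here is roughly what a scaling law looks like. In empirical studies of language models (such as Kaplan et al., 2020), a model’s test loss L is found to fall off as a rough power law in its parameter count N:

L(N) ≈ (N_c / N)^α

Here N_c and α are constants fitted to experiments; the exact values vary from study to study, so treat this as a sketch of the general shape rather than a precise prediction. The exponent α is small, which is why each further drop in loss has required a large multiple of scale. The reported slowdown is, in effect, the claim that frontier models have stopped tracking this curve as reliably as they used to.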

This reporting was greeted with a chorus of “I told you so” from AI skeptics. (I’m not inclined to give them too much credit, as many of them have definitely predicted 20 of the last two AI slowdowns.) But getting a sense of how AI researchers felt about it was harder.

Over the last few weeks, I pressed some AI researchers in academia and industry on whether they thought The Information’s story captured a real dynamic, and if so, how it would change AI development going forward.

The overall answer I’ve heard is that we should probably expect the impact of AI to grow, not shrink, over the next few years, regardless of whether naive scaling is indeed slowing down. That’s largely because, when it comes to AI, an enormous amount of impact is still waiting to happen.

Powerful systems are already available that can do a lot of commercially valuable work; it’s just that no one has quite figured out many of those applications, let alone put them into practice.

It took the internet decades from its birth to transform the world, and it might take AI decades as well. (Maybe. Many people on the cutting edge of this field are still insistent that our world will be unrecognizable within just a few years.)

The bottom line: If greater scale no longer gives us greater returns, that’s a big deal, with serious implications for how the AI revolution will play out. But it’s not a reason to declare that revolution canceled.

Most people kind of hate AI while kind of underrating it

Here’s something those in the artificial intelligence bubble may not realize: AI is not a popular new technology, and it’s actually getting less popular over time.

I’ve written that I think it poses extreme risks, and many Americans agree with me. But many people also dislike it in a much more mundane way.

Its most visible consequences so far are unpleasant and frustrating. Google Images results are full of awful, low-quality AI slop instead of the cool and varied artwork that used to appear. Teachers can’t really assign take-home essays anymore because AI-written work is so widespread, while many students, for their part, have been wrongly accused of using AI when they didn’t, because AI detection tools are terrible. Artists and writers are furious about the use of our work to train models that will then take our jobs.

A lot of this frustration is very justified. But I think there’s an unfortunate tendency to conflate “AI sucks” with “AI isn’t that useful.” The question “What is AI good for?” remains popular, even though the answer is that AI is already good for an enormous number of things, and new applications are being developed at a breathtaking pace.

I think our frustration with AI slop, and with the carelessness with which AI has been developed and deployed, can at times spill over into underrating AI as a whole. A lot of people eagerly pounced on the news that OpenAI and its competitors are struggling to make the next generation of models even better, taking it as proof that the AI wave is all hype and will end in bitter disappointment.

Two weeks later, OpenAI announced its latest generation of models, and sure enough, they’re better than ever. (One caveat: It’s hard to say how much of the improvement comes from scale as opposed to the many other possible sources of improvement, so this doesn’t mean the initial Information reporting was wrong.)

It’s fine to dislike AI. But it’s a bad idea to underrate it. And it’s a bad habit to take each hiccup, setback, limitation, or engineering challenge as reason to expect the AI transformation of our world to come to a halt — or even to slow down.

Instead, I think the better way to look at it is that, at this point, an AI-driven transformation of our world is definitely going to happen. Even if no models larger than those that exist today are ever trained, existing technology is sufficient for large-scale disruptive change. And reasonably often, when a limitation crops up, it’s prematurely declared totally intractable … and then solved in short order.

After a few go-rounds of this particular dynamic, I’d like to see if we can head it off at the pass. Yes, various technological challenges and limitations are real; they prompt strategic changes at the large AI labs and shape how progress will play out. No, the latest such challenge doesn’t mean the AI wave is over.

AI is here to stay, and the response to it has to mature past wishing it would go away.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
