When smartphones first burst onto the scene, the growth and improvements with every new generation were tremendous. These days, the improvements have slowed to a crawl. AI faces a similar problem known as “peak data,” but researchers at Google DeepMind seem to have found a way around it.
What is “peak data”?
The concept of “peak data” mirrors the way technology matures. At the start, when things are new and largely unexplored, the gains with each new generation are huge. But as the technology matures, those gains shrink.
Peak data works the same way. AI models have already been trained on virtually all the so-called “useful data” on the internet. OpenAI cofounder Ilya Sutskever said during a recent conference, “We’ve achieved peak data, and there’ll be no more.” He also suggested that this era of improvements “will unquestionably end.”
Considering the billions of dollars many companies have pumped into the technology, it sounds rather frightening. But it seems that Google DeepMind researchers might have figured out a way to solve the problem.
Google’s solution
The researchers believe they can overcome this problem by changing the way AI models “think”, using an approach known as inference-time compute. Here, a query is split into smaller tasks, with each task acting like its own prompt. Instead of tackling the initial query as a whole, the model breaks it down, processes the smaller tasks one at a time, and only moves on to the next task once it gets each part right.
You could think of this as following a cooking recipe. Creating a dish involves many steps. But rather than doing everything at once, you break down the process into individual tasks. You peel the garlic first, then mince it. After that, you move on to the onions, followed by the carrots, and so on.
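The loop described above can be sketched in a few lines of Python. This is a hypothetical illustration of the decompose-solve-verify idea, not DeepMind’s actual implementation: the `solve` and `check` functions stand in for a model generating an answer and a verifier judging it, and here they just do trivial arithmetic so the example runs on its own.

```python
# Hypothetical sketch of sequential inference-time decomposition.
# solve() stands in for the model proposing an answer to one subtask;
# check() stands in for a verifier that decides whether to move on.

def solve(subtask: str) -> str:
    # Stand-in for a model call: evaluate a tiny "a op b" expression.
    a, op, b = subtask.split()
    return str(int(a) + int(b)) if op == "+" else str(int(a) * int(b))

def check(subtask: str, answer: str) -> bool:
    # Stand-in for a verifier: recompute and compare.
    a, op, b = subtask.split()
    expected = int(a) + int(b) if op == "+" else int(a) * int(b)
    return answer == str(expected)

def answer_query(subtasks: list[str]) -> list[str]:
    """Process subtasks one at a time, advancing only once each part is right."""
    results = []
    for task in subtasks:
        while True:
            candidate = solve(task)
            if check(task, candidate):  # only move on after this part checks out
                results.append(candidate)
                break
    return results

# The query "(2 + 3) * 4" broken into ordered subtasks:
print(answer_query(["2 + 3", "5 * 4"]))  # ['5', '20']
```

The key structural point is the inner loop: each subtask is retried until the verifier accepts it, so errors are caught per step rather than only at the end.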
The researchers at Google DeepMind published a research paper on their approach back in August and found that it had the potential to overcome the problem of AI peak data. But is it the perfect solution? Not exactly.
According to Charlie Snell, one of the researchers who contributed to the paper, inference-time compute works well for questions with a clear-cut answer, such as a math problem. For more open-ended queries without a definitive answer, it won’t be as straightforward. The bright side is that there are early signs of success, so maybe there is some hope.