
Ex-Google CEO warns of the danger of fast AI development


Today, artificial intelligence has become part of our daily lives, often without us even noticing. The sector has advanced at a speed that took many by surprise. However, Eric Schmidt, the former Google CEO, warns that this pace of AI development could even be dangerous for humans. He also points to China's rapid progress in the industry.

AI models are the “secret” behind your favorite AI-powered services

It is rare for a new technology to establish itself as fundamental to the tech industry so quickly. What AI-powered platforms and services can do today seemed unthinkable just a few years ago. Rapid development has given us advanced features that make our daily lives easier. You can make complex edits to images with a few prompts or taps, or understand a foreign language practically in real time, among many other things.

AI models are the foundation of these developments, and they are labeled according to their size: there are LLMs (large language models) and SLMs (small language models). LLMs rely on large data centers equipped with hundreds or thousands of AI chips and GPUs to handle an organization's AI-based tasks. SLMs, on the other hand, enable on-device AI tasks on mobile devices or laptops while consuming far fewer resources. Gemini Pro and Gemini Ultra are examples of LLMs, while Gemini Nano is an SLM.
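A rough back-of-the-envelope calculation shows why this split exists: the memory needed just to hold a model's weights scales with its parameter count. The sketch below uses illustrative, assumed sizes (a ~1-trillion-parameter LLM in 16-bit precision versus a ~3-billion-parameter SLM quantized to 4 bits); the real parameter counts of Gemini Pro, Ultra, and Nano are not all public.

```python
def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory (GB) needed just to store the model weights."""
    return num_params * bytes_per_param / 1e9

# Illustrative, assumed sizes -- not official figures for any Gemini model.
llm_gb = model_memory_gb(1e12, 2.0)  # ~1T params at 16-bit precision
slm_gb = model_memory_gb(3e9, 0.5)   # ~3B params quantized to 4 bits

print(f"LLM weights: ~{llm_gb:.0f} GB")  # thousands of GB -> data center territory
print(f"SLM weights: ~{slm_gb:.1f} GB")  # small enough for a flagship phone's RAM
```

Even before counting the memory needed for inference itself, the weights alone put the hypothetical LLM far beyond any phone, which is why LLMs run in data centers while SLMs like Gemini Nano can run on-device.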

AI platforms capable of self-improvement are dangerous, says ex-Google CEO

That said, the potential danger lies in LLMs, says Eric Schmidt. The ex-Google CEO referred to the speed at which AI has been evolving. "We're soon going to be able to have computers running on their own, deciding what they want to do," said Schmidt. A couple of months ago, we reported on a Meta AI capable of training other AIs. That case serves as a current example, even though it doesn't yet reach the level of what Schmidt describes.

Specifically, the former CEO of Google refers to the point at which AIs are able to "self-improve" on a large scale. "In theory, we better have somebody with the hand on the plug," Schmidt warned. Industry experts estimate that the most powerful AI platforms will be able to work at the level of a Ph.D. student as soon as next year. Schmidt himself believes that AI platforms could begin to carry out research on their own next year.

In the scenario of an AI getting out of control, Schmidt claims that only another AI could stop it. In other words, alongside every large AI platform, there must be another one designed to keep it in check. "AI systems should be able to police AI," he said.

Concerns about AI power in the wrong hands

Eric Schmidt is not only concerned about the danger of an AI evolving on its own. He also wonders what an individual could do with the power of advanced AI at their disposal. "We just don't know what it means to give that kind of power to every individual," he said.

This is not a particularly new view in the industry. In early 2023, prominent names such as Elon Musk and Apple co-founder Steve Wozniak—among others—called for a pause or delay in artificial intelligence advances. They said that the development of AI at such levels should wait until “we are confident that their effects will be positive and their risks will be manageable.”

China is closing the gap with the US in AI development, says ex-Google CEO

The former Google CEO also assessed the risks posed by China's advances in the field of AI. In the past, he thought the United States was far ahead in the segment. However, the Chinese AI industry has grown over the last six months "in a way that is remarkable."

Schmidt believes it is "crucial that America wins this race, globally, and in particular, ahead of China." He considers it unlikely that AI companies will stop or pause their development at this point. Therefore, he advocates taking all necessary steps to guarantee a victory for the West, meaning all the necessary support and funding, whether private or state.


