The prevailing wisdom has been simple: when it comes to building smarter AI, more is more. That idea has driven enormous investments in the data centers that let artificial intelligence models learn from vast troves of existing data. Recently, however, AI specialists in Silicon Valley have begun to question that assumption.
LeCun’s argument rests on the idea that superintelligence will not emerge simply from training AI on enormous volumes of basic data, such as text scraped from the internet. Genuinely intelligent AI, in his view, is a different kind of system altogether.
He also stated:
“The mistake is that very simple systems, when they work for simple problems, people extrapolate them to think that they’ll work for complex problems. Further, they do some amazing things, but that creates a religion of scaling that you just need to scale systems more, and they’re going to naturally become more intelligent.”
LeCun argued that the impact of scaling is currently magnified because many of the most recent advances in AI are actually “really easy.” By his estimate, today’s largest large language models are trained on roughly the same amount of data as a four-year-old takes in through their visual cortex. Progress in AI has also slowed recently, in part because the supply of useful public data is running out.
LeCun is not the only well-known researcher to question the power of scaling. At last year’s Cerebral Valley conference, Scale AI CEO Alexandr Wang called scaling “the biggest challenge in the industry.”
Aidan Gomez, CEO of Cohere, referred to it as the “dumbest” method of enhancing AI models.