AI’s Roadblock: Transformers Aren’t Good at Generalizing
The Reality Check for AI’s Ambitions
Google researchers have delivered a major reality check to those racing toward artificial general intelligence (AGI). In a recent pre-print paper posted to arXiv, the researchers found that transformers, the architecture behind large language models (LLMs) such as ChatGPT, struggle to generalize beyond their training data. The models perform well on tasks that resemble what they were trained on, but poorly on out-of-domain tasks. This is a problem for anyone pursuing AGI, because it means these systems cannot transfer skills across domains the way humans can. Experts suggest that expectations of imminent AGI should be tempered.
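The failure mode described here can be illustrated with a far smaller model. The sketch below is a hypothetical toy, not a transformer and not the paper's experimental setup: a tiny numpy network fits sin(x) reasonably well inside its training range, but its error grows sharply once it is queried outside that range, the same in-domain versus out-of-domain gap in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training domain: x in [-pi, pi]; target: sin(x).
X_train = rng.uniform(-np.pi, np.pi, size=(512, 1))
y_train = np.sin(X_train)

# One hidden layer of 32 tanh units, trained with plain full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.01
for _ in range(10_000):
    h = np.tanh(X_train @ W1 + b1)        # hidden activations, shape (512, 32)
    pred = h @ W2 + b2                    # predictions, shape (512, 1)
    err = pred - y_train                  # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err / len(X_train)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)      # back-propagate through tanh
    gW1 = X_train.T @ dh / len(X_train)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def mse_on(lo, hi):
    """Mean squared error against sin(x) on an evenly spaced grid."""
    X = np.linspace(lo, hi, 200).reshape(-1, 1)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(((pred - np.sin(X)) ** 2).mean())

in_domain = mse_on(-np.pi, np.pi)          # inside the training range
out_domain = mse_on(2 * np.pi, 4 * np.pi)  # well outside it
print(f"in-domain MSE:     {in_domain:.4f}")
print(f"out-of-domain MSE: {out_domain:.4f}")
```

The out-of-domain error is far larger: outside the training range the tanh units saturate and the network extrapolates as a roughly affine function, which cannot track the oscillating target.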
Transformers and Artificial General Intelligence
The field of AI treats AGI, the creation of AI as intelligent as or more intelligent than humans, as its ultimate goal. But if transformers, a core technology in large language models, struggle even with simple extrapolation tasks, AGI remains distant. The limitations highlighted in this paper have prompted a reevaluation of LLM capabilities, challenging the belief that they are a path toward AGI. What AGI would require remains out of reach: the ability to generalize across varied tasks, adapt to unfamiliar scenarios, draw analogies, absorb new information, and reason abstractly.
The Confusion Surrounding Transformers
Despite rapid advances in transformer-based models, there is a gap between their perceived and their actual power. Neural networks such as transformers are opaque, which invites misconceptions about their capabilities, and the sheer volume of training data behind LLMs adds to the confusion: with so much of the input space already covered, people come to believe the models can perform miracles. Experts argue, however, that more advanced forms of AI may demonstrate better generalization than transformers do.
Looking Ahead: The Role of AI Models
While these findings may seem discouraging, some AI industry leaders remain optimistic. Some argue that transformers, though imperfect, still serve valuable purposes and can be guided and aligned. Others believe the limitations identified in the study might be overcome by training models on new tasks rather than only querying them. As the AI field continues to evolve, future advances and innovations may yet pave the way toward AGI.
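The "train rather than only query" idea can be made concrete with a deliberately simple stand-in, ordinary least squares instead of a transformer; the setup below is illustrative and not from the paper. A model fit on one domain and merely queried on another fails badly, while refitting on data from the new domain recovers accuracy.

```python
import numpy as np

def fit_line(X, y):
    # Ordinary least squares for y ~ a*x + b (closed form, deterministic).
    A = np.column_stack([X, np.ones_like(X)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def mse(coef, X, y):
    pred = coef[0] * X + coef[1]
    return float(((pred - y) ** 2).mean())

target = lambda x: x ** 2                  # stand-in for an unseen task family

X_old = np.linspace(-1, 1, 100); y_old = target(X_old)  # original training domain
X_new = np.linspace(3, 4, 100);  y_new = target(X_new)  # out-of-domain queries

frozen = fit_line(X_old, y_old)     # "querying" a frozen model out of domain
retrained = fit_line(X_new, y_new)  # "training" on the new domain instead

print(f"frozen model, new domain:    {mse(frozen, X_new, y_new):.4f}")
print(f"retrained model, new domain: {mse(retrained, X_new, y_new):.4f}")
```

The frozen fit extrapolates catastrophically, while the retrained fit is accurate because x² is nearly linear over the narrow new interval. The analogy is loose, but it captures why updating a model on new data can succeed where querying alone fails.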
Source: Google researchers deal a major blow to the theory AI is about to outsmart humans