Following the dramatic events at OpenAI last week, speculation has swirled over the reasons behind the board's and Ilya Sutskever's decision to remove CEO Sam Altman.
Although not all the facts are known yet, there have been rumors that OpenAI researchers had made a "breakthrough" in AI that alarmed staff members. According to reports from Reuters and The Information, researchers had come up with a new way to build powerful AI systems and had created a new model, known as Q* (pronounced Q star), that can solve math problems at an elementary-school level.
Some at OpenAI believe this could be a milestone in the company's quest to build artificial general intelligence, a much-hyped concept that refers to an AI system smarter than humans, according to the people who spoke with Reuters. OpenAI declined to comment on Q*.
With social media rife with conjecture and over-the-top hype, I spoke with some experts to find out how big a deal any breakthrough in math and AI would really be.
For years, researchers have tried to train AI models to solve math problems. Language models such as ChatGPT and GPT-4 can do some basic arithmetic, but not very effectively or consistently. "We don't even have the right architectures or algorithms to be able to solve math problems reliably using AI right now," says Wenda Li, an AI lecturer at the University of Edinburgh. Deep learning and transformers, the type of neural network underlying language models, are very good at recognizing patterns, but Li says that alone is probably not enough.
In theory, a machine capable of mathematical reasoning could be trained to perform other tasks that build on existing knowledge, such as writing computer code or drawing conclusions from news articles. Math is a particularly hard problem because it requires AI models to reason and to truly understand what they are dealing with.
A generative AI system that could reliably do math would need a solid grasp of the precise definitions of certain concepts, which can become quite abstract. Many math problems also require some degree of multi-step planning, says Katie Collins, a PhD researcher at the University of Cambridge who specializes in math and AI. Indeed, Meta's chief AI scientist, Yann LeCun, posted on X and LinkedIn over the weekend that he thinks Q* is likely to be "OpenAI attempts at planning."
Some worry that capabilities like these could lead to rogue AI, which speaks to one of OpenAI's founding concerns: whether AI poses an existential risk to humanity. According to Collins, safety issues could arise if such AI systems are given the freedom to set their own goals and begin interacting with the real world, whether digital or physical.
But while the ability to solve math problems could bring us a step closer to more powerful AI systems, it doesn't mean superintelligence has arrived.
Collins says, “I don’t think it gets us to scary situations or AGI immediately.” She continues, “It’s also critical to emphasize the types of mathematical problems that AI is able to solve.”
"Pushing the boundaries of mathematics at the level of something a Fields medalist can do is very, very different from solving elementary-school math problems," Collins says, referring to a top prize in mathematics.
Machine-learning research has long focused on solving elementary-school problems, yet even state-of-the-art AI systems haven't fully cracked the challenge. Some AI models fail on very easy math problems while excelling at extremely difficult ones, Collins says.
If Q* really can solve math problems reliably, building an AI system that does so would be a neat advance. A deeper understanding of mathematics could, for example, open up applications that support scientific and engineering research. The ability to generate mathematical answers could help improve personalized tutoring, speed up algebraic computation, or help mathematicians tackle harder problems.
Nor is this the first time a new model has sparked AGI excitement. Just a year ago, the same buzz surrounded Google DeepMind's Gato, a "generalist" AI model that can play Atari video games, caption images, chat, and stack blocks with a real robot arm. At the time, some AI researchers claimed that Gato's impressive versatility meant DeepMind was "on the verge" of artificial general intelligence (AGI). New AI lab, same hype machine.
And while it may make for great PR, these hype cycles do more harm than good for the entire field by distracting people from the real, pressing problems around AI. Rumors of a powerful new AI model could also be a massive own goal for a tech sector that is reluctant to be regulated. The EU, for instance, is close to finalizing its sweeping AI Act, and lawmakers are currently locked in a fierce fight over whether to give tech companies more power to regulate cutting-edge AI models themselves.
OpenAI's board was designed as the company's internal kill switch and governance mechanism to prevent the launch of harmful technologies. The boardroom drama of the past week has shown that at these companies, the bottom line will always come first.