Amid the Sam Altman-OpenAI saga, several artificial intelligence researchers at OpenAI warned in a letter about a powerful AI discovery that could pose a threat to humanity, according to Reuters. The report cited the letter, among other grievances, as one of the reasons for Altman's firing.
One of OpenAI’s key projects, Q* (pronounced Q-Star), could prove to be a breakthrough in artificial general intelligence (AGI). OpenAI defines AGI as “autonomous systems that surpass humans in most economically valuable tasks.”
The report cited an unnamed source as saying that the Q* model was able to solve grade-school-level mathematical problems. Solving mathematical problems is considered a new frontier in the development of generative AI.
The letter also highlighted AI’s capabilities and risks, touching on safety concerns. The company’s researchers also hailed the work of an “AI scientist” team focused on optimizing AI models for scientific tasks.
Altman, instrumental in ChatGPT’s growth, had stated at the recent Asia-Pacific Economic Cooperation summit, “Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime.”
The board dismissed him a day after this statement despite his recent announcements and positive outlook on AI progress.
Tech giant Microsoft (NASDAQ: MSFT) has been a key investor in OpenAI. Even though Altman has returned to OpenAI, questions remain about why he was ousted in the first place.