Highlights:
Here’s How OpenAI Will Determine How Powerful Its AI Systems Are
12/7/24
By: BR Hariyani
OpenAI has created an internal scale to rate how capable its AI systems are, ranging from Level 1 to Level 5.
OpenAI has introduced an internal scale to track the advancement of its large language models toward achieving artificial general intelligence (AGI). This scale, which categorizes AI development into five levels, aims to provide a clearer framework for assessing progress toward AI with human-like intelligence, a spokesperson told Bloomberg.
The Five Levels of AI
Level 1: Basic Problem Solving
Today’s chatbots, such as ChatGPT, sit at Level 1. This level signifies basic problem-solving capabilities, similar to what we experience with current AI interactions.
Level 2: Advanced Problem Solving
OpenAI says it is nearing Level 2, defined as a system capable of solving problems at the level of a person with a PhD. This would represent a significant leap in cognitive ability and specialized knowledge application.
Level 3: Autonomous Actions
Level 3 AI agents are expected to take actions on a user’s behalf. This involves greater autonomy and decision-making ability, moving beyond simple interaction into complex task execution.
Level 4: Creation of Innovations
At Level 4, AI can contribute innovations of its own. Systems at this level would not only solve existing problems but also develop novel solutions and ideas, contributing to technological and scientific advances.
Level 5: Organizational Capabilities
Level 5, the final step toward AGI, is AI that can perform the work of entire organizations of people. OpenAI has previously defined AGI as “a highly autonomous system surpassing humans in most economically valuable tasks.” This level represents the pinnacle of AI development, where machines can manage, innovate, and execute on a scale comparable to large human organizations.
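For readers who prefer a compact, programmatic view, the five tiers can be summarized as a simple ordered enumeration. The sketch below is purely illustrative: the constant names are a paraphrase of the labels reported above, not an official OpenAI API or terminology.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """Illustrative paraphrase of OpenAI's reported five-level scale (not an official API)."""
    BASIC_PROBLEM_SOLVING = 1     # Level 1: today's chatbots, e.g. ChatGPT
    ADVANCED_PROBLEM_SOLVING = 2  # Level 2: PhD-level problem solving
    AUTONOMOUS_ACTIONS = 3        # Level 3: agents acting on a user's behalf
    INNOVATION = 4                # Level 4: developing novel solutions and ideas
    ORGANIZATIONAL = 5            # Level 5: performing the work of entire organizations

# According to the report, current systems such as ChatGPT sit at Level 1,
# and OpenAI says it is approaching Level 2.
current = AGILevel.BASIC_PROBLEM_SOLVING
print(f"Current level: {current.name} ({current.value} of {max(AGILevel).value})")
```

Using an ordered enumeration mirrors how the scale is described: a progression of increasingly capable tiers rather than independent categories.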
OpenAI’s Commitment to Safe AGI Development
OpenAI’s unique structure is centered on its mission of achieving AGI safely and responsibly. The company has committed to halting its own progress and assisting other projects if a value-aligned, safety-conscious project comes close to building AGI before OpenAI does. This commitment, detailed in OpenAI’s charter, underscores the importance of collaborative and ethical AI development.
The introduction of this grading scale provides a structured way to evaluate progress, not just for OpenAI but potentially for other AI developers as well. By defining specific criteria for each level, the scale helps clarify what constitutes advancements in AI capabilities.
Current State and Future Prospects
Despite the structured approach, AGI remains a distant goal. Achieving AGI will require immense computational resources and continued innovation. Timelines for reaching AGI vary widely among experts, with OpenAI CEO Sam Altman estimating a timeline of “five years, give or take,” as of October 2023.
This new grading scale was announced shortly after OpenAI’s collaboration with Los Alamos National Laboratory. This partnership aims to explore how advanced AI models like GPT-4o can safely assist in bioscientific research. The goal is to test GPT-4o’s capabilities and establish safety and performance benchmarks that could be used by public and private entities.
Challenges and Controversies
In May, OpenAI dissolved its Superalignment safety team following the departure of co-founder and chief scientist Ilya Sutskever. The move raised concerns about the company's commitment to safety, especially as it pushes toward AGI. Jan Leike, who co-led the team, resigned shortly after, citing a shift in focus from safety to product development. OpenAI has disputed that characterization, but the episode has sparked debate about the balance between innovation and safety in AI development.
Demonstrating Progress
OpenAI has yet to detail how it assigns models to these internal levels. However, company leaders recently showcased a research project using the GPT-4 AI model, which they believe demonstrates new skills and human-like reasoning. This project, discussed during an all-hands meeting, exemplifies the kind of advancements OpenAI aims to achieve and track using its new scale.
By providing stricter definitions of progress, the scale aims to reduce ambiguity in AI development. OpenAI CTO Mira Murati said in June that the models in development are not much better than what is publicly available, while CEO Sam Altman has pointed to significant recent advances. The scale therefore serves as a benchmark for both internal assessment and public understanding of AI progress.
Stay updated with Kushal Bharat Tech News for more insights and developments in the world of AI and technology.