Eric Drexler's article revisits his 2019 report, "Reframing Superintelligence," in light of recent advances in Large Language Models (LLMs). He argues that the original framework, which emphasizes AI services over unitary agents, remains relevant despite LLMs' unexpected capabilities. The article advocates a broader perspective on AI safety, focusing on structured AI systems and development processes rather than solely on controlling superintelligent agents. Drexler suggests that AI services offer a more manageable and safer path to general intelligence, while acknowledging the risks posed both by AI services themselves and by misaligned human actors.
What is the main idea of "Reframing Superintelligence"?
It expands the ontology of superintelligence beyond unitary agents to include compositions of AI services, emphasizing the structures and relationships among those services.
This perspective promotes the Comprehensive AI Services (CAIS) model, viewing general intelligence as a property of flexible service systems.
What is the Comprehensive AI Services (CAIS) model?
It treats general intelligence as a property of a flexible system of services rather than of a single mind: task-focused agents serve as components, and overall capability grows as the range of AI services expands.
What is a key difference between AI services and AI agents?
AI services are task-focused and bounded, whereas AI agents are unitary and potentially riskier; developing AI as services therefore offers a safer path.
Original article: https://www.alignmentforum.org/posts/LxNwBNxXktvzAko65/reframing-superintelligence-llms-4-years
Advice:
- Focus on developing AI services rather than unitary AI agents to mitigate risks and improve safety.
- Prioritize R&D automation using task-focused AI systems to decouple AI improvement from AI agency.
- Explore structured AI systems and modular deep learning to connect AI safety studies with current R&D practices (a sketch of service composition follows this list).
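To make the services-over-agents contrast concrete, here is a minimal, hypothetical Python sketch of service composition in the CAIS spirit: each component handles one bounded task, and broader capability comes from routing work between components rather than from a single agent holding an open-ended goal. None of this code comes from Drexler's report; names such as `TaskService` and `compose` are illustrative assumptions only.

```python
# A minimal sketch of the service-composition idea behind CAIS.
# All names here are hypothetical; real services would wrap models or APIs.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TaskService:
    """A task-focused AI service: a bounded capability behind a narrow interface."""
    name: str
    run: Callable[[str], str]  # maps a task input to a task output


def make_registry() -> Dict[str, TaskService]:
    """Toy stand-ins for narrow services."""
    return {
        "summarize": TaskService("summarize", lambda text: text[:60] + "..."),
        "translate": TaskService("translate", lambda text: f"[translated] {text}"),
        "review": TaskService("review", lambda text: f"[reviewed] {text}"),
    }


def compose(registry: Dict[str, TaskService], steps: List[str], payload: str) -> str:
    """Route work through narrow services in sequence.

    Broad capability emerges from the composition; no single component
    pursues the overall goal or sees beyond its own task.
    """
    for step in steps:
        payload = registry[step].run(payload)
    return payload


if __name__ == "__main__":
    services = make_registry()
    # Each step is bounded and inspectable, which is the safety-relevant
    # contrast with a unitary agent.
    result = compose(services, ["summarize", "translate", "review"],
                     "Long source document about structured AI systems ...")
    print(result)
```

The design point of the sketch is that oversight can attach to each narrow interface, whereas a unitary agent exposes no comparable seams.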