Safe AGI, ASI, AxI // IIA Davos 2025
There is a lot of hype surrounding artificial intelligence technologies. This hype often leads to anthropomorphizing the technology and creates a disconnect between tech-savvy individuals and those less familiar with the field.
Key Points:
1. Definitions of AI Terms:
— Narrow AI: Specialized in specific tasks.
— AGI (Artificial General Intelligence): Exhibits human-like cognitive abilities.
— Superintelligence: Surpasses human intelligence across all domains.
2. Timeline for AGI Development:
— Predictions range from 5 to 20 years for a 50% probability of achieving AGI.
— Some suggest a 10–25% chance of AGI within 2–3 years.
3. Risks of AGI and Superintelligence:
— Potential existential threats if not controlled.
— Concerns about loss of control, misuse, and competition with humans.
4. International Collaboration:
— Essential for accelerating AI benefits and mitigating risks.
— Examples of successful collaborations include AlphaFold and contributions from Chinese researchers.
5. Proposals for Safe AI Development:
— Establish international bodies, similar to CERN and the IAEA, for AGI research and safety.
— Promote scientific consensus and policy grounded in evidence.
— Encourage collaboration across academia, industry, and governments.
6. Agentic vs. Non-Agentic AI:
— Non-agentic AI (e.g., AlphaFold) is safer and can solve critical problems like curing diseases.
— Agentic AI (with goals and self-preservation) poses higher risks and requires strict controls.
7. Global Awareness and Governance:
— Governments must recognize shared existential risks and collaborate to prevent uncontrolled AGI development.
— Scientist-led initiatives and international summits are crucial for raising awareness and fostering cooperation.
8. Final Consensus:
— International collaboration is vital for ensuring AI development benefits all humanity and avoids catastrophic outcomes.