There are those who believe that advanced AI poses a threat to humanity. The argument is that when AI systems become intelligent enough, they may harm humanity in ways we cannot foresee, and because they are more intelligent than us, we may not be able to stop them. It therefore becomes natural to want to regulate them, for example by limiting which systems can be developed and who can develop them. More and more people are arguing that this regulation should take the form of law.
The result of these fantasies: only a select few actors would be allowed to develop AI systems.
We are at a critical juncture in AI governance. To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority.
A new wave of AI safety and security work has begun, yet how to make AI systems safe and secure remains an open question.
Here are the key points from the Nature Computational Science article:
- AI is a key technology driving innovation, but companies increasingly restricting access to AI innovations could concentrate power in the hands of a few, leading to inequality in AI research, education, and public use.
- The authors argue for the importance of building AI systems according to open-source principles to promote accessibility, collaboration, responsibility, and interoperability.
- They discuss the benefits of open-source software and how a tailored approach is needed for open-source AI across datasets, source code, and models.
- They suggest actions to improve accessibility (funding, data/model repositories), collaboration (community building, iterative development), responsibility (access control, licensing, bias reduction), and interoperability (standardization, modular libraries).
- Open-source AI can complement proprietary AI to increase healthy competition and innovation. The authors call for broad adoption of open-source principles in AI to reduce inequality in AI development and use.
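To make the accessibility point concrete, here is a minimal sketch of what open model repositories enable in practice: anyone can pull an openly licensed model and run it locally. The Hugging Face `transformers` library and the `distilbert-base-uncased-finetuned-sst-2-english` checkpoint are my own illustrative choices, not something prescribed by the article.

```python
# Illustrative only: any publicly hosted, openly licensed model could be
# substituted here. Requires `pip install transformers torch`.
from transformers import pipeline

# Download an openly available sentiment model from a public model hub
# and run it locally -- no proprietary API access required.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Open model repositories lower the barrier to AI research."))
# Example output: [{'label': 'POSITIVE', 'score': 0.99...}]
```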