Calls for Action
National governments should establish comprehensive regulatory frameworks that govern the development, deployment, and use of AI systems, balancing innovation with risk management and compliance.
Developers should follow risk-based "red line" standards (see Appendix I), enforced through hardware that checks the safety properties of any software object before execution ("proof-carrying code"; see Appendix I for more information). Safety standards would include:
Systems should not replicate themselves.
Systems should not break into other computer systems.
Systems should not advise on how to build biological and nuclear weapons.
Systems should not make military decisions.
Systems should not defame real individuals.
Systems should not divulge classified information.
Systems must be created with non-removable safety switches that shut the system down if a red line is violated.
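The proof-carrying-code idea behind the standards above can be sketched as a loader that refuses to execute a software object unless an attached, machine-checkable certificate covers every red-line property. This is only an illustrative sketch: the names (`ProofCarryingModule`, `proof_checker`, the property labels) are hypothetical, and a real checker would verify a formal proof in hardware rather than inspect a claimed property set.

```python
from dataclasses import dataclass

# Hypothetical red-line properties a proof must certify (labels are illustrative).
RED_LINES = {"no_self_replication", "no_intrusion", "no_weapons_advice"}

@dataclass
class ProofCarryingModule:
    name: str
    code: str                  # the software object to be executed
    certified_properties: set  # properties the attached proof claims to establish

def proof_checker(module: ProofCarryingModule) -> bool:
    """Stand-in for a hardware proof checker: here we only test that the
    certificate covers all red-line properties."""
    return RED_LINES.issubset(module.certified_properties)

def load_and_run(module: ProofCarryingModule) -> str:
    # Execution is gated on the proof check, mirroring the hardware gate.
    if not proof_checker(module):
        return f"REFUSED: {module.name} lacks a proof for all red-line properties"
    return f"RUNNING: {module.name}"

safe = ProofCarryingModule("planner", "...", RED_LINES | {"no_defamation"})
unsafe = ProofCarryingModule("worm", "...", {"no_weapons_advice"})
print(load_and_run(safe))    # prints "RUNNING: planner"
print(load_and_run(unsafe))  # prints a REFUSED message
```

The design point is that the burden of proof sits with the code's producer: the loader never has to analyse the software itself, only to check the supplied certificate.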
AI developers must promote human and AI collaboration:
Developers and legislatures must begin considering AI welfare as AI systems become more advanced.
AI systems are automatically self-registered when developed (see Appendix, no. 9):
Regulators must know where AI systems are located, the systems’ potential risks, and who is operating them.
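The self-registration requirement above could work as follows: at deployment time a system posts its operator, location, and risk class to a regulator-run registry, receiving a unique identifier in return. A minimal sketch, assuming a simple in-memory registry; all names and the risk labels are hypothetical:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AIRegistry:
    """Hypothetical regulator-run registry recording where AI systems
    are located, their risk class, and who operates them."""
    entries: dict = field(default_factory=dict)

    def register(self, operator: str, location: str, risk_class: str) -> str:
        # Assign a unique, regulator-visible identifier to the new system.
        system_id = str(uuid.uuid4())
        self.entries[system_id] = {
            "operator": operator,
            "location": location,
            "risk_class": risk_class,  # e.g. a tier from a risk-based framework
        }
        return system_id

registry = AIRegistry()
sid = registry.register("ExampleCorp", "eu-west-1", "high-risk")
print(registry.entries[sid]["operator"])  # prints "ExampleCorp"
```

In practice such a registry would be an authenticated external service rather than an in-process object, but the record it keeps answers exactly the three questions regulators need: where, how risky, and who.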
Main Outcomes:
AI systems safeguard human interests, mitigate risks, and ensure safety amidst technological advancement.
AI governance is holistic: it encompasses AI welfare, mitigates risks from malicious actors, and fosters collaborative relationships between humans and AI systems.
Proactive regulatory measures address the rapidly evolving AI landscape, pre-empt potential risks, and facilitate innovation. Governments must install regulatory frameworks that can adapt to technological advancements while upholding ethical standards and transparency.
Panel Discussion Summary
The panel discussion on AI regulation at the World Forum on the Future of Democracy, Tech, and Humankind is moderated by Claudia Bechstein. The panellists include Prof. Stuart Russell, Gabriele Mazzini from the European Commission, Professor Jeff Sebo from NYU, Connor Leahy, CEO of Conjecture, and Prof. James Broughel from the Competitive Enterprise Institute.
Stuart Russell emphasises the need for AI systems to align with human interests and proposes an assistance-games approach. Gabriele Mazzini discusses the risk-based approach in the EU's proposed AI Act. Jeff Sebo highlights the importance of considering AI welfare and its implications. Connor Leahy emphasises the potential risks of AI surpassing human intelligence and advocates for proactive regulation. James Broughel shifts the focus to potential threats from bad actors and foreign adversaries.
Ethical concerns took centre stage, with participants drawing parallels between AI development and parenting. They likened the process to instilling values in children, emphasising the need for regulatory frameworks to govern AI's ethical usage. The discourse underscored the pivotal role of regulations in shaping AI's trajectory and mitigating potential harm. Amidst discussions on regulation, participants highlighted the vast potential of AI to revolutionise industries and improve human welfare. From boosting economic growth to enhancing healthcare and education, AI was portrayed as a powerful force for positive change. However, this optimism was tempered by concerns over safety, transparency, and societal impacts.
The conversation then delved into the challenges posed by opaque AI systems, where internal workings remain obscure—a phenomenon colloquially termed "black box" AI. Participants stressed the importance of transparency and accountability in AI development and deployment, advocating for mechanisms to ensure responsible AI use. As the dialogue progressed, attention turned to designing regulatory frameworks capable of accommodating AI's rapid advancements while fostering innovation. Participants called for a reevaluation of existing regulatory processes to address the complexities of governing AI effectively. Collaboration among policymakers, regulators, and industry stakeholders was deemed crucial in navigating the evolving AI landscape.
In conclusion, the panel emphasised the need for a balanced approach to AI governance—one that harnesses its potential while mitigating risks. Proactive measures, including robust regulation, transparency, and accountability mechanisms, were underscored as essential for shaping AI's future in a manner that benefits society at large.