Calls for Action
Humanity must embrace the challenge to human exceptionalism that AI represents.
Government research funding organizations and university research institutes should generate more biological data through experiments to help train AI models. (see Appendix III, Research Generating Biological Data for AI Training)
AI safety research organizations such as OpenAI, DeepMind, and MIRI, together with standards bodies such as the IEEE and ISO, should develop standardized benchmarks and testing methods for evaluating AI safety, similar to those OpenAI applied to ChatGPT. (see Appendix I, Questions)
Legal experts should develop sensible yet flexible approaches to AI governance, such as insurance requirements, that can adapt to rapid changes in the field.
Companies such as Waymo and Tesla include liability insurance in their operational models to cover potential accidents, as many jurisdictions require autonomous vehicle operators to carry such coverage.
Medical AI systems, such as diagnostic tools, often require professional indemnity insurance to cover the risks associated with their use, similar to the insurance that human healthcare professionals carry. (see Appendix IV, Insurance in AI)
Government agencies should consider scenarios in which highly intelligent, aligned AI could be used to coordinate responses to global problems that threaten humanity, provided that proper safeguards are in place.
Agencies such as the California Department of Forestry and Fire Protection have employed AI for early wildfire detection, enhancing the speed and effectiveness of response efforts. Similarly, AI has been used to predict and manage resource allocation during crises such as the COVID-19 pandemic, when it helped hospitals manage shortages of personal protective equipment (PPE).
Ethics specialists and AI researchers should further investigate indicators of consciousness in AI and develop guidelines for treating systems of uncertain sentience, based on a precautionary approach. (see Appendix II, Experiments and Research Initiatives)
Main Outcomes
A societal view of AI as a collaborative partner, fostering ethical considerations and a deeper understanding of AI's capabilities.
Improved AI models in biomedicine, leading to advancements in diagnostics, treatment options, and personalized medicine.
Enhanced reliability and trustworthiness of AI systems, leading to safer AI technologies and greater public and industry confidence.
Adaptable legal frameworks that keep pace with AI advancements, effectively managing risks and liabilities while promoting innovation.
More efficient and effective crisis management through AI, improving outcomes in disaster response, healthcare, and resource allocation.
Ethical AI development with a precautionary approach to AI consciousness, promoting responsible treatment of AI systems and public trust.
Panel Discussion Summary
The panel discussed the potential impacts of artificial intelligence (AI) and artificial general intelligence (AGI) on humanity. A trailer presented a vision of AI evolving to regulate news and develop a universal language through quantum computing.
Speakers emphasized the need for responsible development and deployment to ensure a positive impact. They explored the capabilities and limitations of AI, the potential implications of advanced technologies, and the dual existential risks posed by AGI: threats to livelihoods and challenges to human exceptionalism.
Panelists represented different perspectives from academia, nonprofits, and research institutions focused on AI safety and future technologies. They explored both near-term AI applications and long-term philosophical implications.
Allison Duettmann spoke on the current state of AI, definitions of AI/AGI/ASI, computer security concerns, potential for cooperation between humans and AI, and AI's impacts on fields like neuroscience and longevity research.
Dr. Anders Sandberg discussed how physics constrains predictions of future technology, AI automating jobs, the prospect of AI solving science, the consequences of compressed development timelines, and the autonomy trade-offs posed by highly intelligent AI systems.
Prof. Jeff Sebo addressed existential risks to humanity from AI, including threats to livelihoods and challenges to human exceptionalism. He also discussed indicators of consciousness in AI and guidelines for treating systems of uncertain sentience.
Near-term issues included AI computer security vulnerabilities, developing cooperation between humans and AI, and the potential for AI deception. Challenges in generating data for scientific research using AI were also noted.
In the longer term, physical principles such as thermodynamics place limits on AI and technology. Panelists discussed AI automating jobs and skills, the prospect of AI solving all of science, and the consequences of compressing technological development timelines.
Existential risks to humanity include threats to livelihoods from AI as well as challenges to human exceptionalism from more intelligent machines. Embracing this challenge could lead to a more accepting view of successor species.
Precaution is needed regarding AI safety, welfare, and potential sentience until uncertainties are reduced. The loss of autonomy from highly intelligent, aligned AI systems was debated, as was the trade-off between autonomy and the peace achieved through global coordination.
Examples highlighted AI's impacts across fields such as neuroscience, longevity research, and space exploration. Connections were drawn between topics usually seen as separate, such as AI, animal welfare, and space colonization.