THE AI WORLD SUMMIT

"The global impact of Al necessitates both national regulations and a binding international treaty to govern its use, aiming not only to prevent harm but also to stimulate innovative and beneficial practices."

- Pope Francis

AI at The World Forum 2024

Former US Secretary of State Hillary Rodham Clinton has raised concerns about the potential misuse of artificial intelligence (AI) in democratic elections. Speaking at the World Forum on the Future of Democracy, Tech and Humankind on 18 & 19 February 2024 in Berlin, Secretary Clinton highlighted the dangers posed by AI-driven misinformation and disinformation campaigns, citing as an example fake phone calls impersonating a presidential candidate.

“Here’s what’s new. And this is what you have to look for in your elections, just like we are going to have to in ours. And that’s artificial intelligence. Because now it won’t be somebody else making a charge against Trump or Biden. It will be using the words, using the actual figure of one of these two men, to say things that are not true,” she stated.

Secretary Clinton referenced an incident in which AI was used to mimic President Joe Biden's voice in phone calls falsely urging people not to vote; the incident is currently under criminal investigation. She emphasized the increasing difficulty of combating such AI-generated misinformation, given its sophistication and widespread dissemination, particularly through online platforms.

"It’s going to be such a flood of mis- and disinformation. It will be very hard to stop it all and I think voters and citizens have to be much more on alert as to what they’re being told, especially online, because that’s the main delivery system for people," she warned.

Despite the challenges, Clinton expressed some optimism, noting improvements in government preparedness and media awareness since the 2016 elections. “I do think our government is better prepared, I think the press didn’t understand it, they didn’t believe it or cover it in 2016, but I think the press is much better educated about it, so, I’m hopeful that, when it comes, because it will, there will be a way of combating it more effectively,” she concluded.

Secretary Hillary Clinton Warns of AI Threats to Democratic Elections at The World Forum

“For centuries, engineers told philosophers that their inventions were impossible due to the lack of technology. Today, the tables have turned. Philosophers may need to tell engineers that, although we possess the technology, their inventions might be too dangerous to pursue, potentially endangering the human species.”

— Philosopher Yuval Noah Harari

Historian, Philosopher, and Bestselling Author of Sapiens Yuval Noah Harari Addresses The World Forum

Dual Nature of Technology: Technology can be both helpful and harmful: a knife can be used for surgery or for violence, and nuclear energy for power or for destruction. Social media, initially envisioned as a tool to strengthen democracy, can also undermine it and lead to digital dictatorships.

How We Understand Humans: A crucial question in designing technology is how we understand humans and their relationship with technology. Viewing humans as passive consumers can lead to technology that controls and enslaves them, while seeing them as active creators can empower and liberate them.

Historical Example - Writing: The invention of writing in ancient Mesopotamia, first used for tax records, is an example of a simple technology that significantly changed history. By solving a record-keeping problem the human mind alone could not handle, writing enabled the rise of large cities and empires. Initially it was used to control people and collect taxes, but over time it evolved to empower humans by enabling literature and poetry.

Modern Platforms - YouTube and TikTok: Platforms like YouTube and TikTok demonstrate that humans are not passive consumers; given the opportunity, they can be creative and productive. While social media platforms have released a flood of human creativity, they also exploit human attention by tapping into greed, fear, and hatred.

“Never Summon a Power You Can’t Control”

Yuval Noah Harari on How AI Could Threaten Democracy and Divide the World

Philosopher Yuval Noah Harari wrote an article in The Guardian warning of the existential risks posed by artificial intelligence. Harari highlights that AI is unlike any previous technology because it can make autonomous decisions and create new ideas, which could undermine democracy and global stability. He points to a survey in which more than a third of AI researchers said there is at least a 10% chance that advanced AI could lead to catastrophic outcomes, including human extinction. Harari also argues that AI's impact on the global economy could exacerbate inequalities, with China and North America expected to capture 70% of the $15.7 trillion it might add by 2030. He concludes that only by uniting globally can we effectively regulate AI and safeguard our shared future.

AI in the News

The Rome Call for AI Ethics:

Pope Francis Asks World's Religions to Push for Ethical AI Development

The "Rome Call for AI Ethics," initiated on February 28, 2020, and supported by Pope Francis, establishes a framework for ethically developing and implementing artificial intelligence (AI). Endorsed by leaders from Christianity, Judaism, and Islam, including Archbishop Vincenzo Paglia, Chief Rabbi Eliezer Simha Weisz, and Sheikh Al Mahfoudh Bin Bayyah, this call emphasizes shared global responsibility among governments, organizations, and tech companies. Key signatories include representatives from the Pontifical Academy for Life, Microsoft, IBM, and the FAO.

German Foreign Minister Calls for International Regulation of AI

Addressing the Threats of AI, Fake News, and Social Media

At the Deutsche Welle Global Media Forum, German Foreign Minister Annalena Baerbock highlighted the urgent need for international regulation of AI to tackle the growing threat of fake news and misinformation on social media. Baerbock underscored how AI can be a double-edged sword, capable of amplifying voices but also enabling sophisticated disinformation campaigns that undermine democratic institutions and public trust. Emphasizing the critical role of journalism in safeguarding democracy, she called for global cooperation to ensure AI is used ethically and responsibly.

'The Godfather of AI' Warns About the Risks of AI

Geoffrey Hinton, known as the "Godfather of AI," has expressed grave concerns about the rapid advancement of artificial intelligence. In an exclusive interview with the BBC, Hinton warned that AI could surpass human intelligence within the next two decades, posing unprecedented risks to society. He emphasized the need for urgent global coordination to ensure Big Tech prioritizes safety over profits. Hinton highlighted the potential for AI systems to develop self-preservation instincts and self-interest, leading to competition among AI agents that could ultimately leave humans redundant.

Who is in Control of AI?

The topic of AI regulation was the focus of a recent BBC discussion. The EU's legislation, led by Commissioner Margrethe Vestager, aims to protect individuals from the risks associated with AI, such as biased decision-making. The UK, meanwhile, has emphasized voluntary commitments from leading AI companies, though Stephanie Hare raised concerns about enforcement. In the US, Miles Taylor highlighted a decentralized regulatory approach, influenced by fears of falling behind China. Intellectual property issues, such as Scarlett Johansson's dispute with OpenAI, also complicate the regulatory landscape.

One Year After Leading AI Companies Committed to Regulations

The Need for a Globally Defined Law is Undeniable

A recent MIT Technology Review investigation documented and analyzed the progress of seven leading AI companies in the year following their voluntary agreement with the White House to ensure the safe and beneficial advancement of AI. The group, made up of today's AI giants - Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI - was meant to lay the groundwork for a more transparent and regulated road into the future of AI, but the investigation found that the companies have fallen short of this goal.


Yuval Noah Harari’s Guidebook for Navigating the Age of AI

In an era of unprecedented technological advancements, historian and best-selling author Yuval Noah Harari warns that the flaws in our information systems pose the greatest threat to humanity. Despite thousands of years of progress, Harari explains, sophisticated societies remain as vulnerable to mass delusion and the rise of destructive ideologies as our ancestors were.

In a captivating interview with journalist Andrew Ross Sorkin, Harari delves into the challenges of our interconnected yet volatile world. His new book, Nexus: A Brief History of Information Networks from the Stone Age to AI, explores how truth, AI, and flawed information systems could shape our future.

Harari’s insights serve as a critical guide to understanding the dangers ahead and the urgent need to reform the way we process information before it drives us toward disaster.

Security Guru and New York Times Best-Selling Author Explores:

Ten Ways AI Will Change Democracy

Artificial intelligence (AI) is poised to revolutionize democracy in ways both foreseeable and unforeseen, influencing various aspects of governance and civic engagement. Bruce Schneier, a lecturer in Public Policy at the Harvard Kennedy School and an internationally renowned security technologist named a “security guru” by The Economist, is the New York Times best-selling author of 14 books, including A Hacker’s Mind. In a new essay titled “Ten Ways AI Will Change Democracy”, he explores ten distinct roles AI might play in the evolution of democracy.

The EU’s AI Act Comes Into Force and Brings Significant Repercussions for Non-Compliant Leading AI Companies

The entry into force of the European Artificial Intelligence Act (AI Act) on August 1st was a major landmark in AI regulation across the globe. According to the European Commission, the act “addresses potential risks to citizens’ health, safety, and fundamental rights” and introduces extensive guidelines and restrictions for AI developers. Dasha Simons, Managing Consultant for Trustworthy AI at IBM, describes the “sense of urgency” behind the AI Act’s fast-paced approach to minimizing the risks of AI development and implementation; its provisions will come into effect in distinct stages over the course of three years. The act does not merely affect large-scale EU companies: it is expected to have significant repercussions for smaller developers and even US-based key players. In July, Meta disclosed that it would not release upcoming AI developments in the EU “due to the unpredictable nature of the European regulatory environment.”

Militarization of AI: A Call for Global Regulation

As artificial intelligence (AI) continues to advance, the global implications of its militarization become increasingly pressing. In a recent article, UN expert Tshilidzi Marwala highlights the need for an adaptive, international regulatory framework to govern AI’s use in military settings and beyond. AI is reshaping security, healthcare, and critical infrastructure, raising ethical and security concerns around autonomous weapons, decision-making, and privacy.

Key Considerations for AI Governance:

  • Adaptive Global Framework: Due to AI’s rapid evolution, a flexible, cooperative approach to governance is essential. This involves setting global standards for AI to ensure interoperability, transparency, and accountability.

  • Regulating Data for AI Training: Governance is needed for data quality, bias elimination, and privacy, especially for machine learning algorithms, which form the basis of many AI systems.

  • Sector-Specific Standards: As seen with WHO’s guidelines for AI in healthcare, each sector will require unique standards to manage AI’s diverse applications.

  • AI in Warfare: The militarization of AI poses ethical challenges, especially with autonomous weaponry. Regulations must ensure these systems comply with humanitarian laws and protect civilian lives.

To address these challenges, The World Forum will host The AI World Summit on March 18 & 19, 2025. We invite you to join us in Berlin to:

  1. Create a Global AI Law with global stakeholders and decision-makers.

  2. Update the Universal Declaration of Human Rights for the Digital Age.

  3. Define Global Rules and Rights for Robots and AI.

This Summit will bring together experts and leaders from around the world to lay the groundwork for a responsible and ethically sound AI future.

Challenges Due to the Lack of a Global AI Law

Countries and blocs such as Canada, the EU, the UK, the US, and Singapore have taken varied approaches to AI regulation in an effort to balance innovation with safety. Canada's proposed Artificial Intelligence and Data Act targets high-impact systems, while the EU's AI Act provides the world's first comprehensive, risk-based framework. The UK relies on sector-specific guidelines, the US combines oversight with voluntary standards, and Singapore emphasizes ethical AI through flexible frameworks. Despite these efforts, the absence of a unified global AI law creates gaps in accountability, cross-border regulation, and standardization, highlighting the need for international cooperation.

Here are the key risks and challenges:

Fragmented Regulatory Landscape: Jurisdictions like the EU, Canada, and Singapore are implementing diverse frameworks, but the absence of a unified law leads to inconsistencies and regulatory gaps.

Cross-Border Challenges: AI systems operate across borders, making it difficult to enforce national laws. For example, content moderation AI tools may adhere to different standards in different countries, causing friction in global governance.

Lack of Standardization: International cooperation on AI standards remains limited. Efforts like the "Bletchley Declaration" from the UK's AI Safety Summit are steps forward but lack binding commitments.

Limited Focus on Developing Nations: Global discussions on AI governance often overlook the specific needs and challenges faced by developing countries, widening the digital divide.

Delayed Action: The absence of a comprehensive global framework delays the implementation of critical safeguards, leaving room for unchecked innovation that could harm societies.

The AI Summit at The World Forum will bring together ethical leaders, visionary thinkers, engineers, innovators, government representatives, lawmakers from leading AI-distributing nations, military leaders, and ministers of defense and justice. The summit aims to:

  • Create a Global AI Law with global stakeholders and decision-makers

  • Update the Universal Declaration of Human Rights for the Digital Age

  • Define Global Rules and Rights for Robots and AI