Who is in Control of AI?

Countries around the world are working to implement legislation regulating the development and use of artificial intelligence (AI). Last month at the AI Seoul Summit, AI companies from across the globe, including firms based in the US, China, the Middle East, and Europe, agreed to a set of commitments on thresholds for severe AI risks, such as the use of AI to build biological and chemical weapons. The European Union's AI Act, a more comprehensive standard than any regulation existing in the US so far, will come into force next month.

The topic of AI regulation was the focus of a recent BBC discussion. The EU legislation, led by Commissioner Margrethe Vestager, aims to protect individuals from risks associated with AI, such as biased decision-making. The UK, by contrast, has emphasised voluntary commitments from leading AI companies, though Stephanie Hare raised concerns about enforcement. In the US, Miles Taylor highlighted a decentralised regulatory approach, shaped in part by fears of falling behind China. Intellectual property disputes, such as Scarlett Johansson's complaint against OpenAI over a voice resembling hers, further complicate the regulatory landscape. As AI evolves, international cooperation and robust regulatory frameworks will be crucial to ensuring safe and ethical development.