Appendix I

Proof-Carrying Code

Proof-carrying code (PCC) is a software mechanism introduced by George Necula and Peter Lee in 1996. It allows a host system to verify properties of an application by means of a formal proof that accompanies the application's executable code. The primary objective of PCC is to ensure the safety and security of executing applications: the host system can quickly check the validity of the proof and compare its conclusions against its own security policy before running the code.
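
As a rough illustration, the verify-then-run workflow can be sketched as follows. This is a minimal Python sketch under invented names: the verification-condition generator and the string-comparison "proof checker" below are placeholders for the formal proof machinery a real PCC system uses, not a description of Necula and Lee's implementation.

    from dataclasses import dataclass

    @dataclass
    class PccPackage:
        executable: bytes   # untrusted machine code supplied by the producer
        safety_proof: str   # formal proof shipped alongside the code

    def verification_condition(executable: bytes, policy: str) -> str:
        # Placeholder: derive the proof obligation this code must satisfy
        # under the host's own security policy.
        return policy + ":" + str(hash(executable))

    def proof_checker(proof: str, obligation: str) -> bool:
        # Placeholder for a real proof checker; checking a proof is fast and
        # simple compared with constructing one, which is the point of PCC.
        return proof == "proof-of(" + obligation + ")"

    def load_and_run(pkg: PccPackage, host_policy: str) -> None:
        obligation = verification_condition(pkg.executable, host_policy)
        if not proof_checker(pkg.safety_proof, obligation):
            raise PermissionError("proof does not establish the host's policy")
        print("proof accepted; handing code to the runtime")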

Applications of PCC:

  1. Secure Software Distribution: PCC can enhance the security of software distribution channels by ensuring that only verified and trusted applications are executed on the host system.

  2. Safety-Critical Systems: Industries such as aerospace, automotive, and healthcare, which rely on safety-critical systems, can benefit from PCC to guarantee the integrity and reliability of software components.

  3. Mobile and IoT Devices: With the proliferation of mobile devices and Internet of Things (IoT) devices, PCC can help mitigate security risks associated with third-party applications and firmware updates.

  4. Network Security: PCC can be employed in network security applications to validate the behaviour of code executed within networked environments, reducing the risk of malicious activities.


What regulations are needed to move forward with developing AI? 

  1. Reframing Objectives for AI Systems: Instead of assigning specific objectives to AI systems, which can lead to unintended consequences, there is a shift towards defining AI goals as furthering human interests. This approach, known as "assistance games", ensures that AI systems prioritise human well-being and defer to human authority (a toy assistance-game sketch appears after this list). Anthropic's Responsible Scaling Policy, for instance, can help guide the development process by encouraging collaboration between AI researchers, policymakers, ethicists, and other stakeholders. By incorporating human-centric goals into AI design, such as assisting humans and furthering human interests, organisations can mitigate the risk of unintended consequences and ensure that AI technologies align with societal values and norms. (for further research, see Appendices 10-13)

  2. Establishing Red Lines: Red lines serve as boundaries that AI systems should not cross, ensuring they do not engage in egregiously dangerous or unacceptable behaviours. These red lines must be well-defined, automatically detectable, and politically feasible to enforce (the red-line detector sketch after this list illustrates what automatic detection could look like). The European AI Act takes this approach, defining boundaries that AI systems must not cross and implementing stringent measures to enforce them, with the aim of fostering trustworthy AI development in Europe and beyond. This approach is essential for ensuring that AI systems respect fundamental rights, safety, and ethical principles while addressing the risks associated with powerful and impactful AI models. Additionally, the AI Act's future-proof approach allows for adaptation to technological advancements, ensuring ongoing quality and risk management in AI development. Through these efforts, Europe seeks to position itself as a leader in the ethical and sustainable development of AI technologies. (for further research, see Appendix 14)

  3. Proof of Safety: Developers should provide proof of safety to regulators, demonstrating that AI systems will not violate red lines or pose significant risks. This parallels safety regulation in other industries, such as nuclear power, where developers must provide mathematical proofs of safety (the toy reachability check after this list illustrates the idea). OpenAI's policy similarly emphasises rigorous safety measures for AI systems. Key components include thorough testing, engagement with experts, regulatory engagement, learning from real-world use, protection of vulnerable users, respect for privacy, improvement of factual accuracy, and continued research and engagement. The overarching goal is to ensure AI systems are developed and deployed responsibly, with safety built in at all levels. (for further research, see Appendix 15)

  4. Enforcement and Accountability: Regulators need mechanisms to enforce regulations and hold AI developers accountable. This includes requiring AI systems to self-register, implement detectors for red-line violations, and incorporate switches for shutting down non-compliant systems (the red-line detector sketch after this list also includes self-registration and a shutdown switch). Additionally, hardware manufacturers could play a role by ensuring that hardware checks the safety properties of AI software before execution. The treatment of accountability in European regulation, as outlined in documents including the High-Level Expert Group (HLEG) reports, the General Data Protection Regulation (GDPR), and the Artificial Intelligence Act (AIA), is characterised by a broad definition of accountability. In the HLEG reports, accountability is described both as a guiding principle for ensuring compliance with the key requirements for trustworthy AI and as a set of specific practices and measures such as audit, risk management, and redress for adverse impacts; the concept encompasses various dimensions and may include ethical standards beyond legal consequences. Similarly, the GDPR treats accountability as a meta-principle for data controllers, requiring them to demonstrate compliance with GDPR requirements when processing personal data, including responsibilities such as fairness, transparency, purpose limitation, data minimisation, and accuracy. The AIA also incorporates accountability within a risk-based regulatory framework, in which providers and implementers of AI technologies are accountable in different ways depending on the risk level associated with the AI systems they develop or deploy. (for further research, see Appendix 16)
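
The toy assistance-game sketch below illustrates the idea in item 1 (it is not Anthropic's or any lab's actual method): the system is uncertain which outcome the human prefers, and when acting under that uncertainty has lower expected value than asking, it defers to the human. The payoffs and the cost of asking are invented for the example.

    # Two candidate actions and two hypotheses about what the human wants.
    PAYOFF = {
        "prefers_A": {"do_A": 1.0, "do_B": -2.0},
        "prefers_B": {"do_A": -2.0, "do_B": 1.0},
    }
    ASK_COST = -0.1   # small cost of interrupting the human with a question

    def expected_utility(action: str, belief: dict) -> float:
        # belief maps each preference hypothesis to its probability
        return sum(p * PAYOFF[pref][action] for pref, p in belief.items())

    def choose(belief: dict) -> str:
        best = max(["do_A", "do_B"], key=lambda a: expected_utility(a, belief))
        if expected_utility(best, belief) < ASK_COST:
            return "ask_human"   # deferring beats acting under this much uncertainty
        return best

    print(choose({"prefers_A": 0.5, "prefers_B": 0.5}))    # ask_human
    print(choose({"prefers_A": 0.95, "prefers_B": 0.05}))  # do_A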
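
The red-line detector sketch below combines the machinery described in items 2 and 4: each red line is expressed as a machine-checkable predicate over a proposed action, the system registers itself on start-up, and any detected violation trips a shutdown switch and is reported. The registry URL, the predicates, and the action format are all invented for illustration; they are not drawn from the AI Act or any regulator.

    import uuid

    # Illustrative red lines: each name maps to a predicate over a proposed action.
    RED_LINES = {
        "self_replication": lambda a: a.get("type") == "deploy_copy_of_self",
        "critical_infrastructure": lambda a: str(a.get("target", "")).startswith("scada://"),
    }

    class RegulatedSystem:
        REGISTRY = "https://registry.example/ai-systems"   # hypothetical endpoint

        def __init__(self, provider: str):
            self.system_id = str(uuid.uuid4())
            self.shutdown = False
            print("registered", self.system_id, "for", provider, "at", self.REGISTRY)

        def act(self, action: dict) -> str:
            if self.shutdown:
                raise RuntimeError("system has been shut down")
            crossed = [name for name, hit in RED_LINES.items() if hit(action)]
            if crossed:
                self.shutdown = True   # trip the switch and report the violation
                raise RuntimeError("red lines crossed: " + ", ".join(crossed))
            return "executed " + str(action.get("type"))

    system = RegulatedSystem(provider="ExampleCorp")
    print(system.act({"type": "summarise_document", "risk": "minimal"}))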
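
The toy reachability check below illustrates the "mathematical proof of safety" idea from item 3: for a small, finite model of a controller, every reachable state is enumerated and checked against a safety predicate, and the exhaustive result is the kind of evidence that could be handed to a regulator. Real systems rely on model checkers or theorem provers; the transition system and the "unsafe" predicate here are made up for the example.

    def transitions(state: int) -> set:
        # Toy controller dynamics: from any state, step by +2 or +4 (mod 10).
        return {(state + 2) % 10, (state + 4) % 10}

    def is_safe(state: int) -> bool:
        return state % 2 == 0   # odd states stand in for "unsafe" states here

    def verify(initial: int = 0) -> bool:
        # Exhaustive exploration of every reachable state; any unsafe state
        # found is a counterexample to the safety claim.
        seen, frontier = set(), {initial}
        while frontier:
            state = frontier.pop()
            if not is_safe(state):
                return False
            seen.add(state)
            frontier |= transitions(state) - seen
        return True   # every reachable state satisfies the safety predicate

    print("safety obligation discharged:", verify())   # True: no odd state is reachable from 0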

To what extent is it safe to involve AI in the military, infrastructure, and police?

  • Military: AI can enhance military capabilities in numerous ways, such as autonomous vehicles, surveillance systems, and decision-making support. However, there are concerns about the potential for AI to be used in lethal autonomous weapons systems (LAWS), where AI systems make decisions to use lethal force without human intervention. Many experts and organizations advocate for strict regulations to ensure human control over such systems, emphasizing the importance of maintaining ethical standards and adhering to international humanitarian law.
    DOD Releases AI Adoption Strategy - U.S. Department of Defense
    Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World | RAND

  • Infrastructure: AI can improve the efficiency, safety, and maintenance of critical infrastructure such as transportation networks, energy grids, and water systems. For example, AI can optimize traffic flow, detect and respond to infrastructure failures, and enhance cybersecurity. However, there are risks associated with relying too heavily on AI systems for infrastructure management, including the potential for cyberattacks, data breaches, and system vulnerabilities. It's essential to implement robust cybersecurity measures and ensure that AI systems are thoroughly tested and monitored to mitigate these risks.
    Benefit of AI in proactive road infrastructure safety management: ITF findings published - iRAP

  • Police: AI technologies can assist law enforcement agencies in various tasks, including predictive policing, facial recognition, and analyzing large volumes of data to identify patterns and trends. However, there are concerns about the potential for bias and discrimination in AI-powered policing, as well as issues related to privacy and surveillance. It's crucial to develop and implement AI systems that are transparent, accountable, and fair, with mechanisms in place to address bias and protect individuals' rights.
    What Happens When Police Use AI to Predict and Prevent Crime? - JSTOR Daily