Laws & Policies

Some of the significant laws and policies related to AI across the globe are:

European Union (EU)

  1. EU AI Act: Proposed regulation categorizing AI applications into different risk levels with corresponding requirements.

    European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.

    Source: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

  2. General Data Protection Regulation (GDPR): Comprehensive data protection law regulating personal data processing.

    European Parliament and Council. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation).

    Source: https://eur-lex.europa.eu/eli/reg/2016/679/oj

  3. Ethics Guidelines for Trustworthy AI: Framework for developing AI in a trustworthy manner.

    High-Level Expert Group on Artificial Intelligence. (2019). Ethics Guidelines for Trustworthy AI.

    Source: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

United States (US)

  1. National Artificial Intelligence Initiative Act (NAIIA)

    Focuses on funding AI research and establishes a coordinated program across the federal government to accelerate AI research and development.

    U.S. Congress. (2020). National Artificial Intelligence Initiative Act of 2020.

    Source: https://www.congress.gov/bill/116th-congress/house-bill/6216

  2. Algorithmic Accountability Act

    Requires companies to evaluate the impacts of automated decision systems and assess their risks.

    U.S. Congress. (2022). Algorithmic Accountability Act of 2022.

    Source: https://www.congress.gov/bill/117th-congress/house-bill/6580/text

  3. AI in Government Act

    Promotes the use of AI to improve government operations and services.

    U.S. Congress. (2020). AI in Government Act of 2020.

    Source: https://www.congress.gov/bill/116th-congress/house-bill/2575/text

Canada

  1. Artificial Intelligence and Data Act (AIDA)

    Proposed legislation under Bill C-27 aiming to regulate high-impact AI systems and ensure responsible AI development.

    Government of Canada. (2022). Bill C-27: Digital Charter Implementation Act, 2022.

    Source: https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document

  2. Personal Information Protection and Electronic Documents Act (PIPEDA)

    Canada's federal privacy law for private-sector organizations that collect, use, and disclose personal information in the course of commercial activities.

    Office of the Privacy Commissioner of Canada. (2000). Personal Information Protection and Electronic Documents Act (PIPEDA).

    Source: https://laws-lois.justice.gc.ca/eng/acts/p-8.6/

Evaluation of AI laws: These laws and policies represent significant steps toward regulating AI, but they generally lack adaptability to rapid technological change, enforceable ethical standards, and comprehensive guidelines for mitigating long-term societal impacts. More dynamic and detailed regulations are needed to address these limitations.

Case Studies: 

I. Clearview AI Investigation (Canada, 2020) 

1. Conflict: Clearview AI's facial recognition technology was found to be collecting and using images of individuals without their consent, violating Canadian privacy laws.

2. Resolution: Privacy Commissioners from Canada, Alberta, British Columbia, and Quebec conducted a joint investigation, resulting in Clearview AI being ordered to stop collecting and using images of Canadians and to delete previously collected images.

3. Impact: This resolution reinforced the importance of consent and transparency in AI applications and set a precedent for how privacy laws apply to AI technologies.

Office of the Privacy Commissioner of Canada. (2021). Joint investigation of Clearview AI, Inc.

Source: https://priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2021/pipeda-2021-001/


II. Facebook-Cambridge Analytica Data Scandal (UK/US, 2018) 

1. Conflict: Cambridge Analytica harvested personal data from millions of Facebook profiles without consent, using it for political advertising purposes.

2. Resolution: Facebook was fined $5 billion by the FTC in the United States and £500,000 by the Information Commissioner's Office (ICO) in the United Kingdom. The ICO also demanded Cambridge Analytica delete the unlawfully obtained data.

3. Impact: The case highlighted the need for stricter data protection regulations and led to the implementation of more robust privacy policies by tech companies.

Federal Trade Commission. (2019). FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook.

Source: https://www.europarl.europa.eu/doceo/document/TA-8-2018-0433_EN.html 

III. Uber's Self-Driving Car Fatality (US, 2018) 

1. Conflict: A pedestrian was killed by an autonomous Uber vehicle in Arizona, raising concerns about the safety and reliability of self-driving technology.

2. Resolution: Uber suspended its autonomous vehicle testing and overhauled its safety protocols. The company implemented new safety measures, including the presence of two safety operators in vehicles and more robust software for detecting pedestrians.

3. Impact: This incident prompted stricter safety regulations and protocols for autonomous vehicles, influencing legislation and standards across the industry.

National Transportation Safety Board. (2019). Preliminary Report on Uber Technologies, Inc., Test Vehicle Pedestrian Fatality.

Source: Tsai, Jui Tao. (2020, April 2). STS Research Paper presented to the Faculty of the School of Engineering and Applied Science, University of Virginia.

IV. GDPR Enforcement against Google (EU, 2019) 

1. Conflict: Google was found to be non-compliant with the General Data Protection Regulation (GDPR) by failing to provide transparent information about data processing for personalised ads.

2. Resolution: The French data protection authority, CNIL, fined Google €50 million for GDPR violations. Google was required to improve its transparency and obtain valid user consent for data processing.

3. Impact: This case underscored the significance of GDPR in protecting user privacy and the accountability of tech giants in complying with data protection laws.

Commission Nationale de l'Informatique et des Libertés (CNIL). (2019). The CNIL's restricted committee imposes a financial penalty of 50 Million euros against GOOGLE LLC.

Source: https://digitalmarketinginstitute.com/blog/gdpr-enforcement-how-the-recent-google-fine-makes-a-statement