Appendix
I. Legislative reforms driven by Spain and Canada
Inclusive language practices are becoming increasingly prevalent, aiming to foster a more respectful and equitable discourse.
Debates surrounding pronouns and gendered language highlight the complexities of accommodating diverse identities while navigating linguistic norms.
Controversies over language usage underscore the importance of sensitivity and awareness in public discourse.
Case study: Spain
Spain changes constitution language on disabilities https://www.euronews.com/2024/01/18/spain-changes-constitution-language-on-disabilities
Spanish parliament OKs reform to scrap language discriminating against the disabled https://www.euractiv.com/section/politics/news/spanish-parliament-oks-reform-to-scrap-language-discriminating-against-the-disabled/
Masculine, feminist or neutral? The language battle that has split Spain. A new government plans to erase gender bias in the constitution. https://www.theguardian.com/world/2020/jan/19/gender-neutral-language-battle-spain
Case study: Canada
In Canada, there have been legislative efforts to promote inclusive language practices, such as the introduction of Bill C-16, which amended the Canadian Human Rights Act to include gender identity and gender expression as prohibited grounds of discrimination, and made corresponding amendments to the Criminal Code provisions dealing with hate propaganda, incitement to genocide, and aggravating factors in sentencing. This has led to changes in government documents, educational materials, and public discourse to accommodate transgender and non-binary individuals.
II. Case study: How has Finland implemented age-appropriate media literacy lesson plans?
According to Media Literacy in Finland (2019), published by the Ministry of Education and Culture, recent curricula revisions at all levels, from early childhood education to the secondary level and basic education for adults, have been important for the promotion of media literacy. They continue a decades-long effort to promote democratic participation and reduce polarization in Finnish society.
The first media education curriculum was introduced in Finnish schools in 2004 through an action plan addressing violence in the media and media education, though media education initiatives have been present in Finnish schools since the 1950s.
Finland’s approach to media literacy is outlined in the National media education policy, published by the Ministry of Education and Culture in 2019, in collaboration with the National Audiovisual Institute. The promotion of media literacy is a cross-cutting activity for the Ministry of Education and Culture and has expanded to cover other areas of society and administration.
The concepts of misinformation and disinformation are part of student coursework, including the study of famous propaganda campaigns, advertising, and tactics for using misleading statistics.
Finnish media education involves a range of actors: non-government partners, such as civic organizations, schools, libraries, NGOs and universities, are involved in developing media education plans.
Finland also promotes media literacy in accordance with European Union guidance, such as the Audiovisual Media Services Directive (EU 2018/1808) and the Communication from the Commission on Tackling Online Disinformation.
The National Audiovisual Institute, in cooperation with the Ministry of Education and Culture, is responsible for evaluating the implementation of the action plan.
Sources:
https://national-policies.eacea.ec.europa.eu/youthwiki/chapters/finland/68-media-literacy-and-safe-use-of-new-media
https://oecd.org/stories/dis-misinformation-hub/webbooks/dynamic/gov-mis-information-case-studies/d067f517/pdf/media-literacy-education-system.pdf
III. How might platform algorithms and designs unintentionally promote extremism? How are algorithms promoting radicalization? How have technological advancements pushed changes in society?
Based on the panel discussion, panelist Jennifer Mather Saul suggested that algorithms and platform designs may unintentionally promote radicalization by amplifying certain types of extreme or divisive content through recommendation feedback loops. She proposed that researchers studying this issue work with technology experts to develop solutions.
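The feedback loop Saul describes can be sketched with a toy model (the engagement function and all numbers below are illustrative assumptions, not data from any platform or from the panel): if engagement rises even modestly with an item's extremity, and a ranker compounds engagement back into exposure each round, attention concentrates on the most extreme items.

```python
def engagement_rate(extremity):
    # Hypothetical link between content extremity (0-1) and engagement.
    return 0.1 + 0.4 * extremity

def attention_share(extremities, rounds):
    # Each round, every item's score is multiplied by the engagement it
    # earns -- the feedback loop: more engagement -> more exposure ->
    # more engagement. Returns each item's share of total attention.
    scores = [1.0] * len(extremities)
    for _ in range(rounds):
        scores = [s * (1 + engagement_rate(e)) for s, e in zip(scores, extremities)]
    total = sum(scores)
    return [s / total for s in scores]

catalogue = [0.1, 0.3, 0.5, 0.7, 0.9]  # five items, mild to extreme
for rounds in (1, 10, 50):
    shares = attention_share(catalogue, rounds)
    print(rounds, [round(s, 3) for s in shares])
```

After one round the attention shares are nearly uniform; after fifty, the most extreme item captures over 90% of attention, even though the per-round engagement differences are small.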
Ongoing/prior studies on how social media algorithms may contribute to online radicalization:
"The spread of true and false news online" (2019) by researchers at MIT and Sorbonne analyzed over 126,000 stories tweeted by 3 million people on Twitter. They found false news spreads farther, faster, deeper and more broadly than the truth on the platform, in part due to the viral nature of misleading content. Source: https://www.science.org/doi/10.1126/science.aap9559
“Auditing YouTube’s recommendation system for ideologically congenial, extreme, and problematic recommendations” (2023) by Princeton University researchers. Although they do not find meaningful increases in the ideological extremity of recommendations, they show that a growing proportion of recommendations deeper in the recommendation trail come from extremist, conspiratorial, and otherwise problematic channels. This increase is most pronounced among right-leaning users. Source: https://www.pnas.org/doi/10.1073/pnas.2213020120
“Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence” by Joe Burton. This paper analyses how AI and algorithms are being used to radicalize, polarize, and spread racism and political instability. Source: https://www.sciencedirect.com/science/article/abs/pii/S0160791X23000672
“Social media, extremism, and radicalization” (2023) by Aaron Shaw. Fears that YouTube recommendations radicalize users are overblown, but social media platforms still host and profit from dubious and extremist content. Source: https://www.science.org/doi/10.1126/sciadv.adk2031
“Examining Algorithmic Bias and Radicalization on YouTube”, by Homa Hosseinmardi, a post-doctoral research associate at the University of Pennsylvania's Computational Social Science Lab. Her conclusion is that trends in video-based political news consumption are determined by a complicated combination of user preferences, platform features, and the supply-and-demand dynamics of the broader web, rather than simply the policies and algorithmic properties of a single platform. Source: https://infosci.cornell.edu/content/examining-algorithmic-bias-and-radicalization-youtube
Disinformation, Radicalization, and Algorithmic Amplification: What Steps Can Congress Take? By Amb. (ret.) Karen Kornbluh, Senior Fellow and Director of the Digital Innovation and Democracy Initiative at the German Marshall Fund of the United States and former U.S. Ambassador to the Organization for Economic Cooperation and Development. Source: https://www.justsecurity.org/79995/disinformation-radicalization-and-algorithmic-amplification-what-steps-can-congress-take/
"Rewiring what-to-watch-next Recommendations to Reduce Radicalization Pathways", authored by Francesco Fabbri, Yanhao Wang, Francesco Bonchi, Carlos Castillo and Michael Mathioudakis; they develop algorithms that video and other Web platforms could use to make minimal changes to their recommendations so users don't keep seeing misinformation and extremist content. This paper won the 2022 Best Paper Award in the Web Conference. Source: https://www.helsinki.fi/en/news/digitalisation/study-proposing-better-algorithms-avoid-radicalization-won-best-paper-award
IV. Which organizations are currently working on making AI accessible to diverse language communities, and how does this lack of resources relate to political correctness?
During the panel discussion, Jillian C. York talked about how platforms historically lacked moderators for languages such as Arabic, despite societal harms, due to representation gaps in governance. She criticized early platforms for relying primarily on AI for moderation without input from the affected communities. Ensuring that AI systems and policies reflect diverse linguistic and cultural perspectives helps address political-correctness concerns about inclusive and unbiased treatment of all groups; conversely, a lack of diverse representation in developing AI systems and content policies risks perpetuating bias by failing to consider all cultural viewpoints.
Some organizations currently working on making AI accessible to diverse language communities:
Electronic Frontier Foundation, a non-profit digital rights group where Jillian C. York is Director for International Freedom of Expression, advocates for platform reform and diversity in decision-making to better serve global users. It also seeks to improve AI with respect to non-discrimination.
Among the questions it poses: "How do we prevent machine learning systems from producing racially biased results, or from engaging in other problematic forms of 'profiling'?" Source: https://www.eff.org/issues/ai
The Alan Turing Institute, the UK's national institute for data science and artificial intelligence, examines how to build more culturally sensitive AI systems. Source: https://www.turing.ac.uk/research/research-programmes/artificial-intelligence
V. How has the Digital Services Act made things better for people? How has it affected the "issue" of political correctness, and has it fostered more inclusive online spaces?
During the panel discussion, Jillian York advocated for regulations requiring greater transparency from platforms, similar to what the EU's Digital Services Act (DSA) aims to accomplish. When asked about proposed solutions to content moderation challenges across languages, York highlighted the DSA as a positive step that could help address issues of representation and inconsistent policy application globally. She mainly referenced it as an example of the type of regulation that could promote improved practices around transparency and diversity in decision-making.
According to the European Commission, these are the main areas of impact of the Digital Services Act on digital platforms. The DSA significantly improves the mechanisms for the removal of illegal content and for the effective protection of users’ fundamental rights online, including freedom of speech:
Easier reporting of illegal content
Greater transparency in content moderation and more options to appeal
More knowledge and choice over what we see, and more control over personalisation options
Zero tolerance for ads targeted at children and teens, and for ad targeting based on sensitive data
Protection for children
Integrity of elections
New obligations on traceability of business users in online marketplaces
Source: https://digital-strategy.ec.europa.eu/en/policies/dsa-impact-platforms
VI. Efforts to promote diversity and inclusion in scientific inquiry, which aim to address long-standing biases and disparities in research.
Debates over terminology and representation in scientific discourse highlight the role of language in shaping perceptions and inclusivity within the scientific community.
Challenges persist in ensuring equitable access and representation in scientific fields, necessitating ongoing efforts to foster inclusivity.
Case study: the European Union's Horizon 2020 programme. Horizon 2020 was the EU's research and innovation funding programme from 2014 to 2020, with a budget of nearly €80 billion. It was the first framework programme to set gender as a cross-cutting issue, with one of the underpinning objectives being to integrate the gender dimension into research and innovation content, leading to an increased number of "gender-flagged" topics across the programme.