Draft:AI Pluralism
Submission declined on 11 November 2025 by Pythoncoder (talk).
Submission declined on 4 November 2025 by MCE89 (talk). This draft's references do not show that the subject qualifies for a Wikipedia article.
AI pluralism is an approach to artificial intelligence (AI) development, evaluation and governance that aims for systems to reflect or accommodate a diversity of values, perspectives and affected stakeholders. In research on alignment and human–computer interaction, scholars frame pluralism as an alternative to single‑objective optimization: pluralistic systems should surface or steer among reasonable viewpoints and support democratic oversight of their impacts.[1][2] International instruments on AI governance likewise emphasise human rights, democratic oversight and inclusion of affected communities.[3][4]
Definitions and scope
Technical literature distinguishes several modes of pluralism in AI systems. Sorensen et al. propose three: Overton pluralism (surfacing a spectrum of reasonable responses), steerable pluralism (adapting to a specified viewpoint) and distributional pluralism (matching relevant population distributions).[1] Later work introduces temporal pluralism, reflecting different stakeholders’ values at different times.[5] Philosophical treatments relate AI pluralism to value pluralism and democratic legitimacy in alignment, highlighting the need to specify who decides which values govern a system.[6]
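The distributional mode can be illustrated with a toy calculation. The following is a hypothetical sketch, not a method taken from the cited papers: it scores how closely a model's distribution over answer options matches a reference population distribution, using total variation distance as one possible metric. All names and numbers below are invented for illustration.

```python
# Illustrative sketch of "distributional pluralism": how closely does a
# model's distribution over answer options match a reference population?
# The metric (total variation distance) and all data here are hypothetical;
# the cited literature defines the concept, not this particular formula.

def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    """Total variation distance between two distributions over shared options.

    Returns 0.0 for identical distributions and 1.0 for disjoint ones.
    """
    options = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in options)

# Hypothetical survey distribution over stances on a contested question.
population = {"support": 0.45, "oppose": 0.35, "unsure": 0.20}

# Hypothetical distribution of a model's sampled answers to the same question.
model = {"support": 0.70, "oppose": 0.20, "unsure": 0.10}

mismatch = total_variation(population, model)  # ≈ 0.25 on these toy numbers
```

Under this sketch, a lower mismatch would indicate behavior closer to the distributional ideal; the choice of metric and reference population is itself a governance decision.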
Methods and evaluation
Research explores pluralistic behavior both empirically and through system design. A 2025 peer‑reviewed study proposed pluralism as a benchmark for generative AI chatbots, comparing models’ ability to acknowledge and preserve divergent values relative to a human sample.[7] Technical approaches include multi‑model collaborations to support multiple perspectives (e.g. Modular Pluralism)[8] and datasets and models that represent pluralistic values.[2]
Relation to AI governance
Pluralism in deployment and governance overlaps with transparency, accessibility and accountability practices. Documentation frameworks such as model cards support external review of model behavior,[9] while internal algorithmic auditing frameworks address accountability across the system life‑cycle.[10] Accessibility standards (e.g. WCAG 2.2) are often cited as part of inclusive design in AI‑mediated interfaces,[11] and coordinated vulnerability disclosure and PSIRT processes are used to handle safety incidents.[12]
Implementations and indices
Pluralism‑adjacent comparative efforts include the Foundation Model Transparency Index, which scores developers across 100 transparency indicators and publishes periodic updates.[13][14] The AI Pluralism Index (AIPI), introduced in an October 2025 preprint, proposes a measurement framework for pluralistic governance across four pillars (participatory governance, inclusivity and diversity, transparency, accountability) and publishes releases on a project website.[15][16] As of November 2025, significant independent secondary coverage of AIPI has been limited; its status is primarily documented in the preprint and project materials.
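As a hypothetical illustration of how a pillar‑based index might aggregate scores (the actual AIPI methodology is specified in its preprint and is not reproduced here), the sketch below averages invented per‑pillar scores in [0, 1]:

```python
# Hypothetical sketch of a pillar-based index, named after the four AIPI
# pillars. The aggregation rule (an unweighted mean) and the example scores
# are invented for illustration; they are not the AIPI's actual methodology.

PILLARS = (
    "participatory_governance",
    "inclusivity_and_diversity",
    "transparency",
    "accountability",
)

def index_score(scores: dict[str, float]) -> float:
    """Unweighted mean of per-pillar scores, each assumed to lie in [0, 1]."""
    missing = [p for p in PILLARS if p not in scores]
    if missing:
        raise ValueError(f"missing pillar scores: {missing}")
    return sum(scores[p] for p in PILLARS) / len(PILLARS)

# Invented example scores for a single hypothetical organization.
example = {
    "participatory_governance": 0.6,
    "inclusivity_and_diversity": 0.5,
    "transparency": 0.8,
    "accountability": 0.7,
}
overall = index_score(example)  # ≈ 0.65
```

Real indices typically also specify indicator definitions, evidence requirements and weighting choices per pillar, all of which materially affect rankings.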
References
[edit]- ^ a b Taylor Sorensen (2024). "A Roadmap to Pluralistic Alignment". arXiv:2402.05070 [cs.AI].
- ^ a b Sorensen, Taylor; Jiang, Liwei; Hwang, Jena D.; Levine, Sydney; Pyatkin, Valentina; West, Peter; Dziri, Nouha; Lu, Ximing; Rao, Kavel; Bhagavatula, Chandra; Sap, Maarten; Tasioulas, John; Choi, Yejin (2024). "Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties". Proceedings of the AAAI Conference on Artificial Intelligence. 38 (18): 19937–19947. doi:10.1609/aaai.v38i18.29970.
- ^ "The Framework Convention on Artificial Intelligence and human rights, democracy and the rule of law". Council of Europe. 5 September 2024. Retrieved 5 November 2025.
- ^ "Recommendation on the Ethics of Artificial Intelligence". UNESCO. 26 September 2024. Retrieved 5 November 2025.
- ^ T. Q. Klassen (2024). "Pluralistic Alignment Over Time". arXiv:2411.10654 [cs.CL].
- ^ Kasirzadeh, Atoosa (10 October 2024). "Plurality of value pluralism and AI value alignment". OpenReview. Retrieved 5 November 2025.
- ^ Novis-Deutsch, Nurit; Elyoseph, Tal; Elyoseph, Zohar (14 July 2025). "How much of a pluralist is ChatGPT? A comparative study of value pluralism in generative AI chatbots". AI & Society. doi:10.1007/s00146-025-02450-3. Retrieved 5 November 2025.
- ^ Feng, Shangbin; Sorensen, Taylor; Liu, Yuhan; Levine, Sydney; Tsvetkov, Yulia (2024). "Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration". Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. ACL Anthology. pp. 4151–4171. doi:10.18653/v1/2024.emnlp-main.240. Retrieved 5 November 2025.
- ^ Mitchell, Margaret; Wu, Simone; Zaldivar, Andrew; Barnes, Parker; Vasserman, Lucy; Hutchinson, Ben; Spitzer, Elena; Raji, Inioluwa Deborah; Gebru, Timnit (2019). "Model Cards for Model Reporting". Proceedings of the Conference on Fairness, Accountability, and Transparency. pp. 220–229. arXiv:1810.03993. doi:10.1145/3287560.3287596. ISBN 978-1-4503-6125-5.
- ^ Raji, Inioluwa Deborah; Smart, Andrew; White, Rebecca N.; Mitchell, Margaret; Gebru, Timnit; Hutchinson, Ben; Smith-Loud, Jamila; Theron, Daniel; Barnes, Parker (2020). "Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing". Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. pp. 33–44. doi:10.1145/3351095.3372873. ISBN 978-1-4503-6936-7.
- ^ "Web Content Accessibility Guidelines (WCAG) 2.2". W3C. 12 December 2024. Retrieved 5 November 2025.
- ^ "ISO/IEC 29147:2018 – Information technology — Security techniques — Vulnerability disclosure". ISO. Retrieved 5 November 2025.
- ^ "Foundation Model Transparency Index". Stanford Center for Research on Foundation Models. 21 May 2024. Retrieved 5 November 2025.
- ^ Rishi Bommasani; Kevin Klyman; Sayash Kapoor; Shayne Longpre; Betty Xiong; Nestor Maslej; Percy Liang (2024). "The Foundation Model Transparency Index v1.1: May 2024". arXiv:2407.12929 [cs.CY].
- ^ Rashid Mushkani (2025). "Measuring What Matters: The AI Pluralism Index". arXiv:2510.08193 [cs.AI].
- ^ "AI Pluralism Index (AIPI)". aipluralism.wiki. Retrieved 5 November 2025.
