Wikipedia talk:WikiProject Artificial Intelligence

There is a requested move discussion at Talk:Gemini (chatbot)#Requested move 17 September 2025 that may be of interest to members of this WikiProject. veko. (user | talk | contribs) he/him 16:56, 24 September 2025 (UTC)[reply]

Notice

The article 2045 in artificial intelligence has been proposed for deletion because of the following concern:

WP:CRYSTAL

While all constructive contributions to Wikipedia are appreciated, pages may be deleted for any of several reasons.

You may prevent the proposed deletion by removing the {{proposed deletion/dated}} notice, but please explain why in your edit summary or on the article's talk page.

Please consider improving the page to address the issues raised. Removing {{proposed deletion/dated}} will stop the proposed deletion process, but other deletion processes exist. In particular, the speedy deletion process can result in deletion without discussion, and articles for deletion allows discussion to reach consensus for deletion. Bearian (talk) 03:46, 8 October 2025 (UTC)[reply]

Request for draft review: Applied AI Ethics in Practice


Hello! I’ve recently submitted a draft that outlines practical ethical concerns and regulatory standards related to AI systems, such as ISO/IEC 42001 and the EU AI Act. I would appreciate it if someone from this WikiProject could take a look or consider reviewing it:

Draft:Applied AI Ethics in Practice

Thanks in advance! --Veraium (talk) 09:15, 24 October 2025 (UTC)[reply]

Possible addition: semantic drift and fidelity issues in AI coverage


Hi all — I noticed that Wikipedia already has an article on Semantic drift, which describes how word meanings shift gradually over time in linguistics and computational models.

Given the current discussions here (e.g., Conversational AI, Applied AI Ethics in Practice), would it make sense to connect that page more explicitly to AI-related risks? In particular, some recent work distinguishes between:

  • Factual errors (hallucinations: incorrect information)
  • Meaning errors (semantic drift or fidelity decay: words remain legible but hollow out or shift meaning when repeatedly generated or optimized across contexts)

This distinction could be helpful in articles that cover AI limitations, since it captures a different failure mode than just false facts. It also aligns with discussions around cultural/linguistic effects of large language models (sometimes described as “synthetic realness” or the “optimization trap”).

Would others here find it useful to explore whether secondary sources exist to support adding this connection between semantic drift and AI coverage? If so, we might consider a short section or a cross-reference in the relevant articles.

Knowledgedrift (talk) 13:28, 30 October 2025 (UTC)[reply]