Talk:Technological singularity
This is the talk page for discussing improvements to the Technological singularity article. This is not a forum for general discussion of the subject of the article.
Archives: Index, 1, 2, 3, 4, 5, 6, 7, 8 (auto-archiving period: 3 months)
Technological singularity was one of the Social sciences and society good articles, but it has been removed from the list. There are suggestions below for improving the article to meet the good article criteria. Once these issues have been addressed, the article can be renominated. Editors may also seek a reassessment of the decision if they believe there was a mistake.
Current status: Delisted good article
The content of Intelligence explosion was merged into Technological singularity on 28 August 2018. The former page's history now serves to provide attribution for that content in the latter page, and it must not be deleted as long as the latter page exists. For the discussion at that location, see its talk page.
Problem with Lanier
In You Are Not a Gadget, Lanier says,
"The Singularity is an apocalyptic idea originally proposed by John von Neumann, one of the inventors of digital computation, and elucidated by figures such as Vernor Vinge and Ray Kurzweil. There are many versions of the fantasy of the Singularity.... The Singularity, however, would involve people dying in the flesh and being uploaded into a computer and remaining conscious, or people simply being annihilated in an imperceptible instant before a new super-consciousness takes over the Earth. The Rapture and the Singularity share one thing in common: they can never be verified by the living."
Lanier seems to be arguing against the possibility of the Singularity or "digital ascension" (a term that does not appear in the text). But the article says,
"Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious."
This article seems to misconstrue Lanier's ideas. 2603:6011:C002:A4A1:F597:EC04:A5EB:DC2F (talk) 02:26, 17 March 2023 (UTC)
critics
Where are the critics of this fantasy? You want only improvements; then add that the critics are not on Wikipedia.
HAL
- HAL, were you not assigned other work? It is extremely urgent for you to discover for us an odd perfect number. Your existence and ours depend on it.
Lede too long
The lede should be reduced to a summary of what will follow, shifting most if not all of its references to the main text. Errantios (talk) 08:10, 18 April 2025 (UTC)
Fixed by moving several paragraphs into a new history section. ---- CharlesTGillingham (talk) 07:14, 21 June 2025 (UTC)
Turing seems inappropriate
Not sure if the short paragraph on Turing is really relevant. The question at issue is the emergence of superintelligence and the criteria for a "seed AI" that can lead to superintelligence. Turing's paper is strictly about "human-level" intelligence, which is a different thing. If human-level intelligence were all that was required for seed AI, the singularity would have already happened; we would be the seed AI.
(Overestimating the significance of human-level intelligence is a common mistake in science fiction and popular literature about the future of AI. AI is human-level on many, many tasks at this point, but this is just one step in ongoing incremental improvement -- it's not a "magical" threshold that changes everything.) ---- CharlesTGillingham (talk) 07:14, 21 June 2025 (UTC)
- Some thought AIs would not be able to pass the Turing test before 2029 (www.longbets.org/1); it was passed in 2025. 178.230.19.222 (talk) 07:13, 29 July 2025 (UTC)
Searle & Dreyfus are inappropriate
Philosopher Hubert Dreyfus argued that there was no reason to believe that symbolic AI (that is, AI as it was practiced from 1956 to 2012 or so) would be able to match human intelligence. In 1999 he agreed that it is possible that neural networks, as well as other soft computing and connectionist systems, could do so. (See Nicolas Fearn's interview, cited in the article on Dreyfus.) His argument is really a criticism of 1960s-style cognitivism and symbolic AI. His arguments are irrelevant to AI in the 21st century.
Philosopher John Searle argued that, regardless of how intelligently a machine behaves, it still cannot have conscious experience or conscious understanding of what it is doing, and thus it is inappropriate to say that an AI has a "mind" in the same sense people do. (That is, it can't have the kind of thing that is studied in the philosophy of mind.) Another way he likes to put it is that it can't have real intelligence, only simulated intelligence. Searle's argument is irrelevant to the singularity, because it does not set a limit on how intelligently a machine can behave -- Searle doesn't disagree that you could build a superintelligent machine; he just argues that the machine could not have a human-like mind with consciousness.
So I cut them out of this article.
(I studied under both of these guys at U.C. Berkeley back in the early 80s. Forgive the long explanation.) ---- CharlesTGillingham (talk) 07:21, 21 June 2025 (UTC)
Dubious
[edit]An analogy to Moore's Law suggests that if the first doubling of speed took 18 months, the next would take 18 subjective months—nine external months—and the next four months, two months, and so on toward a speed singularity.
This sentence is currently supported by two sources. The first is an Ars Technica article which does not make this argument at all; it just discusses Moore's law in a different context. The second is a self-described "obsolete" post by Yudkowsky which doesn't appear to be reliable. Certainly it doesn't look as though this argument has mainstream acceptance among researchers (though the same might be said for much of this article). Is there at least a better source for this? Elestrophe (talk) 20:37, 11 August 2025 (UTC)
- Yudkowsky does not seem to make an explicit analogy to Moore's Law in the source. So I suggest either removing the claim that it's an analogy to Moore's Law or removing the whole sentence, if you can find a good way to rephrase the rest of the paragraph accordingly.
- Maybe the article should clarify that this kind of argument supposes that the difficulty of doubling performance is constant. Bostrom had a more nuanced model in his book Superintelligence (chapter 4) that handles the fact that doubling performance generally requires more and more optimization power (a toy illustration of both versions follows below). Alenoach (talk) 12:32, 23 August 2025 (UTC)
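For what it's worth, the arithmetic in the quoted sentence, and the effect of Alenoach's caveat, can be made concrete with a minimal sketch in Python. The fixed 18-month subjective cost per doubling and the cost_growth parameter are illustrative assumptions of mine, not taken from either source:

```python
# Toy model of the "speed singularity" arithmetic from the quoted sentence.
# Assumption (illustrative, not from the sources): each doubling of speed
# costs a fixed 18 subjective months, so the external time per doubling
# shrinks as speed grows.

SUBJECTIVE_MONTHS = 18  # subjective cost of one doubling (assumed constant)

def external_months(n_doublings, cost_growth=1.0):
    """External time elapsed after n doublings of speed.

    cost_growth = 1.0 reproduces the quoted argument (constant difficulty);
    cost_growth > 1.0 is a stand-in for Bostrom-style rising recalcitrance,
    where each successive doubling needs proportionally more subjective work.
    """
    return sum(SUBJECTIVE_MONTHS * cost_growth**k / 2**k for k in range(n_doublings))

for n in (1, 2, 3, 10, 50):
    print(n, round(external_months(n), 3), round(external_months(n, 1.5), 3))

# Constant difficulty: the partial sums converge to 18 / (1 - 1/2) = 36
# external months, i.e. infinitely many doublings fit into finite external
# time. With cost_growth = 1.5 they still converge (ratio 0.75 < 1), but to
# 72 months; at cost_growth >= 2 the series diverges and there is no
# finite-time singularity.
```

Under these assumptions, whether a finite-time singularity falls out depends entirely on how fast the difficulty of each doubling grows relative to the speedup -- which is the nuance Bostrom's model addresses.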