Talk:ASCII

ASCII is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
Article milestones
Date               Process                      Result
January 19, 2004   Refreshing brilliant prose   Kept
December 30, 2005  Featured article review      Kept
May 10, 2008       Featured article review      Demoted
Current status: Former featured article

Cleanup

I made some bold removals of uncited opinion, such as:

  1. "Most modern character-encoding schemes" – when was this written? Unicode is the only modern encoding system in use today;
  2. "ASCII has practically speaking been replaced (because of limited language support), e.g. with extended ASCII encodings, and most recently by Unicode (which supports all languages); its ASCII-compatible UTF-8 encoding (which is dominant on the web). ASCII only supports English (and a few minority languages) and doesn't handle e.g. many loan words or given names of all American people." – because
    1. "limited language support" is not the reason why ASCII was replaced;
    2. "extended ASCII" is a misnomer for ISO 8859-1. We should not feature errors in the lead, given that the true story is in the body;
    3. the reference to UTF-8 is way too detailed for the lead.
  3. replaced some instances of {{cite journal}} that I guessed used to be {{cite document}} (which at the moment redirects to {{cite journal}}). I guess someone is doing a cleanup in preparation for releasing {{cite document}} to do what it says on the tin. I used {{cite techreport}}, which is not ideal, but {{cite standard}} has other issues.
  4. I severely edited "Despite being an American standard, ASCII, unlike e.g. modern UTF-8 or other extended ASCII supersets, doesn't support symbols such as the cent, ¢ (or €, ©), though it does support the dollar, $."
    1. UTF-8 is entirely irrelevant in this context
    2. € is not a US native character
    3. ¢ is the only really serious omission but I suspect that this was another case (like ~) where the designers hoped it would be met by backspace and overtype.
    4. © is just one of many symbols in common use that are not supported. So we give all or none.
    5. Middle English: predates 1776, I think? Anyway, just makeweight.

Obviously WP:BRD applies but anyone reverting needs to reinstate the CS1/2 fixes I applied.

An observation: the lead should summarise the body but looks to me to be thin on the technical content? 𝕁𝕄𝔽 (talk) 17:07, 7 November 2022 (UTC)[reply]

For the claim that "Kaypro CP/M computers used the 'upper' 128 characters for the Greek alphabet"--which is tagged as needing a citation--I could not find a clearly reliable source. Pac Veten (talk) 01:12, 15 March 2025 (UTC)[reply]

Need better choice for diacritics

The word resume seems to be a poor choice as an example of the need for diacritics.

https://novoresume.com/career-blog/how-to-spell-resume DGerman (talk) 20:36, 4 April 2023 (UTC)[reply]

I read that word entirely wrong and was confused until I spotted the word 'career' in the provided link. Yeah, people ignore the spelling if the context is right... I suppose I mostly wonder if you have a better example. Vollink (talk) 20:32, 27 September 2023 (UTC)[reply]

Old Latin

I'm not confident about this, but wasn't Old Latin written with a subset of the alphabet without diacritics? As opposed to Archaic or Classical Latin. DAVilla (talk) 02:11, 20 January 2024 (UTC)[reply]

Change the Google topic to capital letters.

Please capitalise. 183.87.191.150 (talk) 16:33, 19 September 2024 (UTC)[reply]

We aren't in control of that. Remsense ‥  16:34, 19 September 2024 (UTC)[reply]

ASCII-8

The story for IBM System/360 is that, at the time, there was a proposed ASCII-8 standard that they could have used instead of EBCDIC, but the standard was never approved. It was not just ASCII-7 with new characters, but, using the stick notation, some of the sticks moved up. That could be mentioned in the 8 bit section. Gah4 (talk) 11:51, 17 November 2025 (UTC)[reply]

I think the proposal was to insert an extra digit into ASCII at the 2^5 place, so that binary xxxxxxx turned into xx0xxxxx. I have no idea why IBM was in favor of this, though a guess is that they wanted to put 64 control characters at the start. All 8-bit extensions of ASCII add the extra digit at 2^7, i.e. 0xxxxxxx. Spitzak (talk) 19:15, 17 November 2025 (UTC)[reply]
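For what it's worth, the two mappings described above can be sketched in a few lines of Python. This is purely illustrative: the function names are made up, and the xx0xxxxx layout is the proposal as recalled here, not a published standard. Gah4 (talk) 19:40, 17 November 2025 (UTC)[reply]

```python
def ascii8_proposed(c7: int) -> int:
    """Insert a 0 bit at the 2^5 place of a 7-bit code,
    so binary xxxxxxx becomes xx0xxxxx (the recalled ASCII-8 proposal)."""
    high = (c7 >> 5) & 0b11    # top two bits of the 7-bit code
    low = c7 & 0b11111         # low five bits (row within the "stick")
    return (high << 6) | low   # reassemble with a 0 in the 2^5 place

def ascii8_adopted(c7: int) -> int:
    """What 8-bit extensions actually do: prepend a 0 at the 2^7
    place (0xxxxxxx), leaving the 7-bit code value unchanged."""
    return c7 & 0x7F

# 'A' is 0x41 = 100 0001; the proposal would move it to 1000 0001 = 0x81.
print(hex(ascii8_proposed(0x41)))  # 0x81
print(hex(ascii8_adopted(0x41)))   # 0x41
```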