This is the talk page for discussing improvements to the 64-bit computing article.
This is not a forum for general discussion of the article's subject.
64-bit computing has been listed as a level-5 vital article in Technology. If you can improve it, please do. This article has been rated as C-Class.
WikiProject Computing / Software / Hardware (Rated C-class, High-importance)
Threads older than 60 days may be archived.
- 1 Question to answer and add
- 2 32-bit vs 64-bit
- 3 When?
- 4 A bit of an overstatement when discussing the advantages of x86-64
- 5 A bit of an overstatement when discussing the advantages of x86-64
- 6 NetBSD and Itanium
- 7 Drivers -- majority of OS code?
- 8 Title
- 9 48 bits for virtual memory
- 10 UNICOS 64 bit
- 11 Why does the table of data models lump 'size_t' and pointers into a single column?
- 12 Current 64-bit microprocessor architectures are not enough... for an encyclopaedia...
- 13 External links modified
- 14 Symbolics
Question to answer and add
How big would a 2^64-byte computer be, built with 2015 maximum-density flash drives (and cooling), or hard drives, etc.? Or tape drives? — Preceding unsigned comment added by 188.8.131.52 (talk) 18:53, 8 February 2015 (UTC)
32-bit vs 64-bit
A 64-bit processor completely and entirely supports 16-bit and 32-bit code without any "emulation" or "compatibility mode". Protected mode (32-bit) or long mode (64-bit) has to be explicitly enabled. The bootloader of every x86 / x64 operating system is written in 16-bit assembly, which then enables protected mode (32-bit) and then long mode (64-bit). The following text only applies to Windows, as is made clearer by the source.
"older 32-bit software may be supported through either a hardware compatibility mode in which the new processors support the older 32-bit version of the instruction set as well as the 64-bit version, through software emulation, or by the actual implementation of a 32-bit processor core within the 64-bit processor, as with the Itanium processors from Intel, which include an IA-32 processor core to run 32-bit x86 applications. The operating systems for those 64-bit architectures generally support both 32-bit and 64-bit applications." 184.108.40.206 (talk) 21:51, 16 November 2013 (UTC)
- I assume you're referring specifically to x86 processors here, from the reference to "protected mode" and "long mode". The Itanium instruction set started out as a 64-bit instruction set, so there are no "16-bit" or "32-bit" instructions to support, except by convention, and, as far as I know, there were never any compilers generating 16-bit or 32-bit code for it. The DEC Alpha instruction also started out as a 64-bit instruction set, although Windows NT and Microsoft's compilers ran it with 32-bit pointers, and Digital UNIX had a "taso" ("truncated address space option") compiler mode (presumably for programs that didn't need a large address space and could save memory and have a smaller cache footprint with 32-bit pointers), but those didn't constitute an older 32-bit address space to support backwards compatibility.
- In the case of x86-64, no, I wouldn't describe the ability to run IA-32 code as a "hardware compatibility mode", any more than I'd describe the ability of 64-bit PowerPC processors to run 32-bit PowerPC code or the ability of SPARC v9 processors to run SPARC v7 or SPARC v8 code or the ability of z/Architecture processors to run System/360/System/370/System/390 code or... as a "hardware compatibility mode".
- I'll revise that text (in a fashion that is not x86-specific!). Guy Harris (talk) 22:22, 16 November 2013 (UTC)
When?
It would be good if statements like this
Currently, most proprietary x86 software is compiled into 32-bit code, with less being also compiled into 64-bit code (although the trend is rapidly equalizing)
were dated so the reader knows when "Currently" was.
A bit of an overstatement when discussing the advantages of x86-64
The article says
- This is a significant speed increase for tight loops since the processor doesn't have to go out to the second-level cache or main memory to gather data if it can fit in the available registers.
but x86 processors typically have an L1 data cache, and that's been true for quite a while (dating back, as I remember, at least as far as the first Pentium), so even in 32-bit mode it's not as if references to anything not in a register have to go to the L2 cache or main memory. Guy Harris (talk) 20:23, 12 December 2010 (UTC)
- I've seen some comments that x86 is not really all that register-starved, not since the implementations started including a register file with register renaming. Among other things, the register file sort of acts like an "L0 cache", avoiding even having to go to the L1 cache when reloading a register from where it was saved. Additional architectural registers would likely be of more benefit to the assembly language programmer and to the optimizing compiler. Jeh (talk) 21:01, 12 December 2010 (UTC)
- ...and you can get smaller machine code by using registers rather than, say, on-stack locations. I wouldn't be surprised at a performance win from the increased number of registers, but it'd be interesting to see measurements. (It'd also be interesting to see how much of an improvement comes from changing the ABI, e.g. passing arguments in registers, although some of the reason why they switched to passing arguments in registers is that there were more registers available.) Guy Harris (talk) 22:53, 12 December 2010 (UTC)
A bit of an overstatement when discussing the advantages of x86-64
Something completely overlooked in the article is the penalty of address translation: 64-bit addresses require more effort to translate to real addresses. See p. 3-41 (117 in the PDF file) on how the IBM z/Architecture translates addresses, for example. Certainly translation lookaside buffers help here, but the penalty for a cache miss seems to be bigger. How much this translates into a performance penalty I don't know, but the concept deserves discussion. Jhlister (talk) 01:56, 23 April 2011 (UTC)
NetBSD and Itanium
NetBSD was not running on Itanium when it was released. Is it even running on Itanium now? See http://www.netbsd.org/ports/ia64/ — Preceding unsigned comment added by Nros (talk • contribs) 16:23, 23 April 2011 (UTC)
- Yes, seems unlikely when the NetBSD/ia64 port seems to have started in 2005. It's possible IA-64 has been confused with x86-64 here, since both NetBSD and Linux were ported to x86-64 in 2001. Letdorf (talk) 18:36, 29 April 2011 (UTC).
Drivers -- majority of OS code?
The following passage:
Drivers make up the majority of the operating system code in most modern operating systems (...)
is a big and surprising claim. Unless the view of OS is narrowed down to just kernel, majority of code would be programs that make up the OS shell and/or included basic applications. I don't have any hard numbers to back it up, but it could use at least some clarification. Dexen (talk) 20:41, 3 May 2011 (UTC)
- Even if you do narrow it down to the kernel, how much of the kernel-mode code is device drivers, as opposed to, for example, file systems, network protocols up to the transport layer, the virtual memory subsystem, etc.? Guy Harris (talk) 18:05, 4 May 2011 (UTC)
Title
Before I saw the funny templated lead, I moved the article to a noun-phrase title, as WP:TITLE suggests. The Template:N-bit used to say something like "N-bit is an adjective...", which is true. Now it's used to make an awkward and hard-to-improve lead. This is very bogus, I think. Let's start with a sensible title, and use italics when we discuss a term, instead of use it, as in normal style. If someone has a better idea for a title or approach, let us know. Dicklyon (talk) 06:26, 31 July 2012 (UTC)
- So presumably you'll also update 4-bit, 8-bit, 12-bit, 16-bit, 18-bit, 24-bit, 31-bit (although it mainly discusses 32-bit architectures with 31-bit addressing), 32-bit, 36-bit, 48-bit, 60-bit, and 128-bit?
- That's why I referred to the template in my note. We can look at all the others, too, sure. I'd be surprised if they should really have such parallel leads. Dicklyon (talk) 15:38, 31 July 2012 (UTC)
- At least some of them, if turned into "N-bit computing", could lose some sections, e.g. 16-bit mainly talks about 16-bit computing, but has a section about "16-bit file formats" (of which I'm not sure there are enough to render that interesting) and one about "16-bit memory models" (which really means "x86 memory models when not running in 32-bit or 64-bit mode"), and 48-bit has a section about 16-bit-per-color-channel images, so at least some "N-bit" pages could turn into disambiguation pages. Guy Harris (talk) 20:12, 31 July 2012 (UTC)
- Well, as noted, it's not clear 16-bit, for example, has a scope; it discusses various unrelated or semi-related flavors of 16-bitness, and if you change the title to give a clue to the topical scope, some items may fall out of scope just as a consequence of choosing a different title. Guy Harris (talk) 23:20, 31 July 2012 (UTC)
- I did, but I eated it^W^Wfixed it. The "Images" section is gone, the information in it is in Color depth#Deep color (30/36/48-bit), and there's a hatnote pointing people there if they got here via the 64-bit redirection and were interested in 64-bit images rather than 64-bit processors. Guy Harris (talk) 03:55, 1 August 2012 (UTC)
- By the way, it's not clear to me why the "at most" is in there. Anyone know? Dicklyon (talk) 15:39, 31 July 2012 (UTC)
- But "those that are at most 32 bits (4 octets) wide" would include those that are 16 bits wide, as I read it, so a machine of all 16-bit datapaths and elements would fit the definition of 32-bit here. The limitation is not well expressed, nor is its intent or meaning discernible. Dicklyon (talk) 22:20, 31 July 2012 (UTC)
- Oh, one more thing wrong with the template is that it adds a sentence "N-bit is also a term given to a generation of computers in which N-bit processors are the norm.", regardless of whether there was ever such a generation or not (1-bit computers were the norm at any point? I don't think so - and even if N-bit processors were the most common by volume, they weren't necessarily the "norm", e.g. in an era of 8-bit micros there were plenty of 16-bit minicomputers and 32-bit mainframes). Guy Harris (talk) 04:12, 1 August 2012 (UTC)
OK, I've rewritten the lead, skipping the N-bit template but using the box that it transcluded. I'm open to feedback and improvements on the lead paragraphs. If we think this is a good direction, we can start to do analogous things in some of the others. Dicklyon (talk) 03:39, 1 August 2012 (UTC)
- I might be tempted to leave out the bus widths, as there might be 64-bit processors with wider data buses (as the data bus to memory, at least, can be wider, as the machine might fetch 128 bits or more in the process of filling a cache line).
- For other N-bit articles, the address bus is unlikely to be wider than the register width, but in some older processors I think there were narrower address buses, with the address put onto the bus with multiple bus cycles. In addition, while 64-bit machines have "64-bit addresses" in the sense that the processor doesn't ignore any bits of the address, there were 32-bit processors that ignored the upper 8 bits of the address (System/360s other than the System/360 Model 67, pre-XA System/370s, Motorola 68000, and Motorola 68010) and 32-bit processors that ignored the upper bit of the address (System/370-XA and later). There's also processors that didn't have programmer-visible general-purpose-style registers, such as the stack-oriented (48-bit) Burroughs large systems and (16-bit) "classic" HP 3000 machines, but, in the case of stack machines, the equivalent of the register width is the width of expression stack elements. Guy Harris (talk) 04:23, 1 August 2012 (UTC)
- 1-bit architecture has the same issue, so I've mentioned this discussion here Talk:1-bit_architecture. Widefox (talk) 20:15, 28 August 2012 (UTC)
- before we start moving more individual pages (like 48-bit without updating the two templates to eliminate the redirects) can we reach consensus here first please. I'll throw a suggestion in.... DABs at the (adjectives) "n-bit" with list articles at "List of n-bit computers" Widefox (talk) 22:12, 30 August 2012 (UTC)
- I think we should just start moving them. What redirects are concerning you? While we're at it, we should get rid of the templates that are making them impossible to improve. Can we do that by putting "subst:" into them? Seems to work; I did that at 48-bit computing. Dicklyon (talk) 06:18, 31 August 2012 (UTC)
- Have we agreed to this mass rename? What about the colour and sound parts of the articles? What about my suggestion above? Anyhow, Template:CPU_technologies has the definitive list of them (not the navbox), here it is: 1-bit architecture 4-bit 8-bit (no 9-bit) (no 10-bit) 12-bit (no 15-bit) 16-bit 18-bit (no 22-bit) 24-bit (no 25-bit) (no 26-bit) (no 27-bit) 31-bit 32-bit (no 33-bit) (no 34-bit) 36-bit (no 39-bit) (no 40-bit) 48-bit computing (no 50-bit) 60-bit 64-bit computing 128-bit 256-bit.
- where "no" means a link from the template to a/the computer as there's no article yet. Haven't thought about your template problem - can't you just fix the template? Coincidentally, I just fixed up 8-bit (disambiguation). Widefox (talk) 12:06, 31 August 2012 (UTC)
- Didn't answer your question: when you rename (and of course create a redir to the new article name) the two templates then link to the redirs which breaks the navigation (bold). i.e. they need updating too. Widefox (talk) 12:11, 31 August 2012 (UTC)
Trying to restart the discussion, drawing on the above discussion...is there consensus for:
- "n-bit" are redirects as primary meaning to
- "n-bit computing" articles (scope of hardware and software)
- "n-bit (disambiguation)" become DAB pages with redirects as primary meaning
- An alternative of splitting "n-bit computing" into hardware and software articles (say "n-bit architecture" "n-bit application") seems overkill with these short articles, and can be handled in the DAB for those that are already split
Five years later
So the punchline to all of this, five years down the road, is that the only article other than this one that had been renamed (that I can tell), the 48-bit article, was subsequently reverted back from 48-bit computing to simply 48-bit. This was done by a well-established Wikipedian, and I must therefore assume non-capriciously, with the edit summary "No one (I think) says 48-bit computing". So, an application of WP:COMMONNAME, which is fair.
Meanwhile, policy may have evolved slightly; the phrase "Titles are often proper nouns" no longer appears at the opening of WP:COMMONNAME, although WP:NOUN does still admonish editors to "use nouns". Regardless, all of the other n-bit articles are named as such, except this one, and I find that inconsistency most troubling of all. Indeed, Consistency is one of the Five Pillars Five Legs on the Stool of Article Naming, as listed at WP:CRITERIA.
Though everyone who participated in this discussion originally may be completely sick of the entire topic, and I wouldn't blame you in the slightest, I thought I'd at least make an attempt at revisiting it. Since all evidence points to a mass article move to the N-bit computing format would be a hard sell (if not impossible), and since this article currently stands as the lone outsider, I find myself in the disappointing position of thinking that it should probably be moved back to 64-bit for the sake of simple consistency. Thoughts? -- FeRD_NYC (talk) 09:24, 14 February 2018 (UTC)
- I find the "titles should be nouns" argument compelling. If an article is about a thing, or a concept, that thing or concept has a name and the name - a noun - should be the article title. Even if it isn't a "proper" name (which is something we would normally write in Title Case). A title like "48-bit" is not a noun. I think all of these articles should be "n-bit computing". I will object to moving this article to "64-bit" until my last day on Wikipedia. Jeh (talk) 12:00, 14 February 2018 (UTC)
- @Jeh: I find the nouns argument compelling as well. But I find the consistency argument equally compelling, so my struggle (and my goal) here is trying to find some way to balance the two. Renaming all of the articles to "n-bit computing" is one way to achieve that, but seems like an uphill battle.
- I'm also a bit on the fence about whether it's the correct approach. WP:COMMONNAME is also in play here; while I personally don't find it quite as compelling as the other arguments, it is accepted policy and a factor in the article-titling decision. I decided to do a little digging into Christian75's argument that "no one [...] says 48-bit computing". I first hit Google Trends, to examine common search terms, but things didn't go that well there. Terms like "32-bit computing", "32-bit architecture", "32-bit processor", etc. all came up bust except for "64-bit computing". For that term Trends has both search-frequency data and a Topic listing — though it's hard to say whether the latter is truly organic, or if the title of this very article may have influenced it. But it does appear true, based on Trends, that "nobody says" "32-bit computing", "48-bit computing", "16-bit computing", etc.
- So then I decided, to heck with the web, let's find out what book authors say, and I decamped from Trends to Ngrams. Restricting the search to books from 1970 – 2008 (the latest year available), I had it plot the frequency of "n-bit" uses for the most common values of N:
- (Note: The spaces around the hyphen in that plot's term(s) are, apparently, necessary "to match how we processed the books". If you attempt an Ngrams search for e.g. "32-bit" it'll correct it for you and display that message.)
- So then I had it plot the frequency of those other phrases I mentioned, "n-bit computing", "n-bit architecture", "n-bit processor", "n-bit microprocessor":
- That view has the matching "n-bit computing" phrases selected for highlighting, along with a few of the other noun phrases which appear more frequently. The reason these aren't all on one chart, BTW, is that the "n-bit" matches in the first chart are a full two orders of magnitude greater than the matches on the second chart... if you plot them together, the entire second chart collapses into the X axis line.
- So I guess there are at least a few different ways all that could be interpreted:
- Everything is right the way it is now, with "64-bit computing" different from the rest, since that's the only version of the "n-bit computing" phrase that shows up with any discernible frequency.
- Leaving the other articles as "n-bit" is the right way to go, and this one should be renamed as well, because it's indeed true that no one writes "n-bit computing". (Not even relative to the vanishingly small frequency with which they use phrases like "32-bit processor" or "16-bit microprocessor".)
- We should find names for the articles based around phrases that people actually do use, like "64-bit processor" and "8-bit microprocessor" (which are at least nouns), and possibly pare off some of their content that deals with other aspects of 64-bit/8-bit to other articles, because "64-bit computing" and "8-bit computing" and etc. are not conceptually discussed as such, they're discussed in real-world terms based around the physical devices which perform computations utilizing operands of the given size.
- I should go away and quit poking this bear. -- FeRD_NYC (talk) 13:16, 16 February 2018 (UTC)
48 bits for virtual memory
"For example, the AMD64 architecture as of 2011 allowed 52 bits for physical memory and 48 bits for virtual memory." - since then, it's 64 bit for virtual memory, if I understand correctly the new edition of "AMD64 Programmer's Manual Volume 2" . SyP (talk) 20:14, 12 July 2013 (UTC)
- Virtual addresses in AMD64 are (and always have been) 64 bits wide, and all 64 bits must be correctly specified, but only 48 bits' worth of VAS is implemented. Bits 0 through 47 are implemented, and bits 48 through 63 must be the same as bit 47. This has not changed since 2011. Intel64 does the same thing. X86-64#Virtual_address_space_details for more explanation, with diagrams. Jeh (talk) 18:28, 27 March 2015 (UTC)
48-bit Physical Address Space, not 48-bit Virtual Address Space. On AMD64, 64-bit Virtual Addresses are translated to 48-bit Physical Addresses in RAM. @Syp is right, I've read the same manual but updated for 2017... pages 31, 55, 56 and others.
- To quote AMD64 Architecture Programmer’s Manual Volume 2: System Programming, section 5.1 "Page Translation Overview", on page 120, "Currently, the AMD64 architecture defines a mechanism for translating 48-bit virtual addresses to 52-bit physical addresses." So that's a 52-bit physical address space, and currently a 48-bit virtual address space; they then say "The mechanism used to translate a full 64-bit virtual address is reserved and will be described in a future AMD64 architectural specification." Guy Harris (talk) 16:47, 25 August 2017 (UTC)
- IP, you are not making a correct distinction between the number of bits in a virtual address and the number of bits of virtual address that are actually translated and available for the programmer (or compiler + linker + loader, etc.) to use. The latter is what defines the size of virtual address space, ie the number of usable (AMD uses the term "canonical") virtual addresses.
- Your note that "AMD make a clear distinction between Virtual Addresses and Physical Address Space" is correct, but not salient. The issue here is not dependent on any confusion between virtual and physical address space.
- In AMD64 / Intel 64, virtual addresses are 64 bits wide, but the only thing the MMU does with the high 16 bits - bits 48 through 63 - is check to be sure that they are all the same as bit 47. That means that once you've decided what will be in bit 47, bits 48 through 63 are also determined. So bits 48 through 63 do not really contribute to the size of virtual address space, only to the size of a virtual address as it is stored and loaded.
- An analogy: I don't know how phone numbers are formatted in NZ... but in the US we have a three-digit area code (similar to the "city code" used in many countries), a three-digit central office code, and a four-digit number within the CO. So for each area code you might think you can have phone numbers from 000-0000 through 999-9999, 10 million possible numbers. Well, no. There are a lot of rules that preclude the use of significant swaths of the seven-digit "number space" within each area code. For example, all n11 CO codes are typically unusable, because three-digit numbers of that form have special meanings: 411 gets you to directory assistance, 911 for emergency assistance, sometimes 511 or 611 for telco customer service, etc. A CO code can't start with "0" or "1" because "0" all by itself gets you to the operator, and a leading "1" means "area code follows" (in most places you don't have to dial an area code if the number you're calling is in the same AC as you). There are other limitations (see NANP if you care) but the point here is simply this: because of rules that restrict the choice of numbers that can be used, the usable "phone number space" within an area code is considerably smaller than one might infer from the simple fact that there are seven digits.
- Nevertheless the phone numbers within the AC are indisputably seven digits wide.
- Similar is true here. Virtual addresses, as they appear in RIP, RSP, etc., are 64 bits wide. But the MMU can only translate virtual addresses that lie in the ranges from 0 to 00007FFF`FFFFFFFF inclusive, and from FFFF8000`00000000 to FFFFFFFF`FFFFFFFF inclusive. If bit 47 is 0, then bits 48 through 63 must also be 0, and if bit 47 is 1, then bits 48 through 63 must also be 1. Attempting to reference any address not within either of these ranges results in an exception (see AMD64 Architecture Programmer’s Manual Volume 2: System Programming, section 5.3.1 "Canonical Address Form"; currently, the most-significant implemented bit is bit 47). Only bits 0 through 47 - 48 bits total - participate in the address translation scheme, as illustrated in the same manual: Figure 5-17 on page 132. So only 48 bits out of the 64-bit virtual address are actually translated, and the usable virtual address space is 2 to the 48th bytes (256 TiB, or about 281 TB), not 2 to the 64th (16 EiB, or about 18 EB). Jeh (talk) 18:54, 25 August 2017 (UTC)
- As for the 52-bit physical address, refer to figure 5-21, the format of a page table entry in long mode. Bits 12 through 51 (that's 40 bits) of the PTE provide the high order 40 bits of the "physical page base address" (the low-order 12 bits of this address are assumed to be zero). (This 40-bit number is also called the "physical page number", or by some OSs including Windows, the "page frame number". PFNs go from 0 through one less than the number of physical pages of RAM on the machine.) Append the low-order 12 bits from the original virtual address being translated to get the byte offset within the page, as shown in figure 5-17. There's your 52 bits. Jeh (talk) 22:08, 25 August 2017 (UTC)
UNICOS 64 bit
I'm not sure that short int in UNICOS is 64 bits long. Here are some docs. http://docs.cray.com/books/S-2179-50/html-S-2179-50/rvc5mrwh.html#FTN.FIXEDPZHHMYJC2 — Preceding unsigned comment added by 220.127.116.11 (talk) 18:48, 14 October 2013 (UTC)
- On the other hand, http://docs.cray.com/books/004-2179-001/html-004-2179-001/rvc5mrwh.html#QEARLRWH. Guy Harris (talk) 19:51, 14 October 2013 (UTC)
- Remember that there have been several different OSs called UNICOS running on various different architectures. That Cray C and C++ Reference Manual seems to be referring to UNICOS/mp, which was actually based on SGI IRIX 6.5, rather than "classic" UNICOS. Regards, Letdorf (talk) 23:18, 15 October 2013 (UTC).
- Exactly. The manual 18.104.22.168 cited was for UNICOS/mp; the one I cited was for a more "classic" UNICOS.
- The manual I cited says that short int used 64 bits of memory, but that, apparently, not all those bits were used; on all but the T90, it only used 32 bits, and, on the T90, it only used 46 bits. I think the older Crays were word-addressable, and they may not have bothered adding byte-oriented addressing except for char and variants thereof, so they just stuffed short int into a word. Guy Harris (talk) 23:45, 15 October 2013 (UTC)
Why does the table of data models lump 'size_t' and pointers into a single column?
In the C and C++ languages the width of 'size_t' is not related to the width of pointer types. The integer types whose width is tied to that of pointers are called 'intptr_t' and 'uintptr_t'. In the general case, the width of 'size_t' is less than or equal to that of pointer types. It is a rather widespread error to believe that 'size_t' is somehow supposed to have the same width as pointers, apparently caused by the wide adoption of "flat memory" platforms. 22.214.171.124 (talk) 23:02, 5 January 2014 (UTC)
Current 64-bit microprocessor architectures are not enough... for an encyclopaedia...
I suggest that a new section be made:
History and museum-grade 64-bit microprocessor architectures
- There's already a "64-bit processor timeline" section, which covers the architectures that are no longer made, as well as the current ones. Guy Harris (talk) 09:52, 7 May 2014 (UTC)
- On first thought, when I made my comment, I was imagining two lists: one of current 64-bit architectures, and one of those no longer in mass production and use. That way it would be easier to check whether some CPU is missing and add it, instead of traversing the whole timeline: as it stands, we have to walk over the timeline to see whether some architecture is there or not, while a separate list gathering the architectures of historical or research value would be faster. But on second thought, after reading your comment, I think this issue is not so important; maybe they do not deserve a separate list to be more boldly represented in the article, and a single list of current architectures is better than a single list of all architectures. I still lean towards a list of architectures not used by modern software, placed close to the list of those that are, but now I have doubts: if modern software doesn't use them, do they deserve more attention in this article? I like the timeline, and I accept any opinions. — Preceding unsigned comment added by 126.96.36.199 (talk) 15:07, 7 May 2014 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified 2 external links on 64-bit computing. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20071011053054/http://via.com.tw/en/resources/pressroom/2004_archive/pr041005_fpf-isaiah.jsp to http://www.via.com.tw/en/resources/pressroom/2004_archive/pr041005_fpf-isaiah.jsp
- Added archive https://web.archive.org/web/20100910014057/http://www.x86-64.org/pipermail/announce/2001-June/000020.html to http://www.x86-64.org/pipermail/announce/2001-June/000020.html
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
- The first works; the second takes you to a mailing list item with a broken link to the paper being cited. For that one, I pointed directly to the paper, instead. Guy Harris (talk) 17:35, 23 June 2017 (UTC)