64-bit computing: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 14:03, 4 November 2008

In computer architecture, 64-bit integers, memory addresses, or other data units are those that are 64 bits (8 octets) wide. Also, 64-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size.

64-bit CPUs have existed in supercomputers since the 1960s and in RISC-based workstations and servers since the early 1990s. In 2003 they were introduced to the (previously 32-bit) mainstream personal computer arena, in the form of the x86-64 and 64-bit PowerPC processor architectures.

A CPU that is 64-bit internally might have external data buses or address buses with a different size, either larger or smaller; the term "64-bit" is often used to describe the size of these buses as well. For instance, many current machines with 32-bit processors use 64-bit buses (e.g. the original Pentium and later CPUs), and may occasionally be referred to as "64-bit" for this reason. Likewise, some 16-bit processors (for instance, the MC68000) were referred to as 16-/32-bit processors, as they had 16-bit buses but some internal 32-bit capabilities. The term may also refer to the size of an instruction in the computer's instruction set or to any other datum (e.g., 64-bit double-precision floating-point quantities are common). Without further qualification, a "64-bit" computer architecture generally has integer registers that are 64 bits wide, which allows it to support (both internally and externally) 64-bit "chunks" of integer data.

Architectural implications

Registers in a processor are generally divided into three groups — integer, floating point, and other. In all common general purpose processors, only the integer registers are capable of storing pointer values (that is, an address of some data in memory). The non-integer registers cannot be used to store pointers for the purpose of reading or writing to memory, and therefore cannot be used to bypass any memory restrictions imposed by the size of the integer registers.

Nearly all common general purpose processors (with the notable exception of most ARM and 32-bit MIPS implementations) have integrated floating point hardware, which may or may not use 64-bit registers to hold data for processing. For example, the x86 architecture includes the x87 floating-point instructions, which use eight 80-bit registers in a stack configuration; later revisions of x86 also include SSE instructions, which use eight 128-bit wide registers. By contrast, the 64-bit Alpha family of processors defines thirty-two 64-bit wide floating point registers in addition to its thirty-two 64-bit wide integer registers.

Memory limitations

Most CPUs are designed so that the contents of a single integer register can store the address (location) of any datum in the computer's virtual memory. Therefore, the total number of addresses in the virtual memory — the total amount of data the computer can keep in its working area — is determined by the width of these registers. Beginning in the 1960s with the IBM System/360, then (amongst many others) the DEC VAX minicomputer in the 1970s, and then with the Intel 80386 in the mid-1980s, a de facto consensus developed that 32 bits was a convenient register size. A 32-bit register meant that 2^32 addresses, or 4 GB of RAM, could be referenced. At the time these architectures were devised, 4 GB of memory was so far beyond the typical quantities (16 MB) available in installations that this was considered to be enough "headroom" for addressing. A 4 GB address space was also considered an appropriate size to work with for another important reason: 4 billion integers are enough to assign unique references to most physically countable things in applications like databases.

However, by the early 1990s, the continual reductions in the cost of memory led to installations with quantities of RAM approaching 4 GB, and the use of virtual memory spaces exceeding the 4-gigabyte ceiling became desirable for handling certain types of problems. In response, a number of companies began releasing new families of chips with 64-bit architectures, initially for supercomputers and high-end workstation and server machines. 64-bit computing has gradually drifted down to the personal computer desktop, with some models in Apple's Macintosh lines switching to PowerPC 970 processors (termed "G5" by Apple) in 2003, and with 64-bit x86-64 processors (launched the same year with the AMD Athlon 64) becoming common in high-end PCs.

The emergence of the 64-bit architecture effectively increases the memory ceiling to 2^64 addresses, equivalent to approximately 17.2 billion gigabytes, 16.8 million terabytes, or 16 exabytes of RAM. To put this in perspective, in the days when 4 MB of main memory was commonplace, the maximum memory ceiling of 2^32 addresses was about 1,000 times larger than typical memory configurations. Today, when over 2 GB of main memory is common, the ceiling of 2^64 addresses is about ten billion times larger, i.e., ten million times more headroom than the 2^32 case.

Most 64-bit microprocessors on the market today have an artificial limit on the amount of memory they can address, because physical constraints make it highly unlikely that one will need support for the full 16.8 million terabyte capacity. For example, the AMD Athlon X2 has a 40-bit address bus and recognizes only 48 bits of the 64-bit virtual address.[1] The newer Barcelona X4 supports 48-bit physical addresses and 48 bits of the 64-bit virtual address.

64-bit processor timeline

  • 1976: Cray Research delivers the first Cray-1 supercomputer, which is based on a 64-bit word architecture and would form the basis for later Cray vector supercomputers.
  • 1983: Elxsi launches the Elxsi 6400 parallel minisupercomputer. The Elxsi architecture has 64-bit data registers but a 32-bit address space.
  • 1993: DEC releases the 64-bit DEC OSF/1 AXP Unix-like operating system (later renamed Tru64 UNIX) and the OpenVMS operating system for Alpha systems.
  • 1994: Intel announces plans for the 64-bit IA-64 architecture (jointly developed with Hewlett-Packard) as a successor to its 32-bit IA-32 processors. A 1998–1999 launch date is targeted. SGI releases IRIX 6.0, with 64-bit support for R8000 CPUs.
  • 1995: Sun launches a 64-bit SPARC processor, the UltraSPARC.[4] Fujitsu-owned HAL Computer Systems launches workstations based on a 64-bit CPU, HAL's independently designed first-generation SPARC64. IBM releases the A10 and A30 microprocessors, 64-bit PowerPC AS processors.[5] IBM also releases a 64-bit AS/400 system upgrade, which can convert the operating system, database and applications. DEC releases OpenVMS 7.0, the first full 64-bit version of OpenVMS for Alpha.
  • 1996: Nintendo introduces the Nintendo 64 video game console, built around a low-cost variant of the MIPS R4000. HP releases an implementation of the 64-bit 2.0 version of their PA-RISC processor architecture, the PA-8000.[6]
  • 1997: IBM releases the RS64 line of 64-bit PowerPC/PowerPC AS processors.
  • 1999: Intel releases the instruction set for the IA-64 architecture. AMD publicly discloses its set of 64-bit extensions to IA-32, called x86-64 (later renamed AMD64).
  • 2001: Intel finally ships its 64-bit processor line, now branded Itanium, targeting high-end servers. It fails to meet expectations due to the repeated delays in getting IA-64 to market. Linux is the first operating system to run on the processor at its release.
  • 2003: AMD introduces its Opteron and Athlon 64 processor lines, based on its AMD64 architecture, the first 64-bit extension of the x86 architecture. Apple also ships the 64-bit "G5" PowerPC 970 CPU, courtesy of IBM, along with an update to its Mac OS X operating system which adds partial support for 64-bit mode. Several Linux distributions release with support for AMD64. Microsoft announces plans to create a version of its Windows operating system to support the AMD64 architecture. FreeBSD releases with support for AMD64. Intel maintains that its Itanium chips will remain its only 64-bit processors.
  • 2004: Intel, reacting to the market success of AMD, admits it has been developing a clone of the AMD64 extensions named IA-32e (later renamed EM64T). Intel also ships updated versions of its Xeon and Pentium 4 processor families supporting the new instructions.
  • 2006: Sony, IBM, and Toshiba begin manufacturing of the 64-bit Cell processor for use in the PlayStation 3, servers, workstations, and other appliances.

32 vs 64 bit

A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture. Other software must also be ported to use the new capabilities; older software is usually supported through either a hardware compatibility mode (in which the new processors support the older 32-bit version of the instruction set as well as the 64-bit version), through software emulation, or by the actual implementation of a 32-bit processor core within the 64-bit processor (as with the Itanium processors from Intel, which include an x86 processor core to run 32-bit x86 applications). The operating systems for those 64-bit architectures generally support both 32-bit and 64-bit applications.

64-bit processors can perform certain tasks, such as computing the factorial of a large number, about twice as fast as their 32-bit counterparts (this example comes from a comparison of the 32-bit and 64-bit Windows Calculator; the speedup is noticeable for the factorial of, say, 100 000). This gives a general feeling for the theoretical possibilities of 64-bit optimized applications.

One significant exception to this is the AS/400, whose software runs on a virtual ISA, called TIMI (Technology Independent Machine Interface) which is translated to native machine code by low-level software before being executed. The low-level software is all that has to be rewritten to move the entire OS and all software to a new platform, such as when IBM transitioned their line from the older 32/48-bit "IMPI" instruction set to 64-bit PowerPC (IMPI wasn't anything like 32-bit PowerPC, so this was an even bigger transition than from a 32-bit version of an instruction set to a 64-bit version of the same instruction set).

While 64-bit architectures indisputably make working with large data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate as to whether they or their 32-bit compatibility modes will be faster than comparably-priced 32-bit systems for other tasks. In x86-64 architecture (AMD64), the majority of the 32-bit operating systems and applications are able to run smoothly on the 64-bit hardware.

Sun's 64-bit Java virtual machines are slower to start up than their 32-bit virtual machines because Sun has only implemented the "server" JIT compiler (C2) for 64-bit platforms.[9] The "client" JIT compiler (C1), which produces less efficient code but compiles much faster, is unavailable on 64-bit platforms.

Speed is not the only factor to consider in a comparison of 32-bit and 64-bit processors. Workloads such as multi-tasking, stress testing, and clustering for high-performance computing (HPC) may be more suited to a 64-bit architecture given the correct deployment. For this reason, 64-bit clusters have been widely deployed in large organizations such as IBM, HP, and Microsoft.

Pros and cons

A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GB of memory. This is not entirely true:

  • Some operating systems reserve portions of process address space for OS use, effectively reducing the total address space available for mapping memory for user programs. For instance, Windows XP DLLs and userland OS components are mapped into each process's address space, leaving only 2 to 3.8 GB (depending on the settings) of address space available, even if the computer has 4 GB of RAM. This restriction is not present in 64-bit operating systems.
  • Memory-mapped files are becoming more difficult to use on 32-bit architectures, especially with the introduction of relatively cheap recordable DVD technology. A 4 GB file is no longer uncommon, and such large files cannot be memory-mapped easily on 32-bit architectures; only a region of the file can be mapped into the address space at a time, and to access such a file by memory mapping, the regions will have to be mapped into and out of the address space as needed. This is a problem, as memory mapping, when properly implemented by the OS, remains one of the most efficient disk-to-memory methods.
  • Some programs such as data encryption software can benefit greatly from 64-bit registers (if the software is 64-bit compiled) and effectively execute 3 to 5 times faster on 64-bit than on 32-bit.

The main disadvantage of 64-bit architectures is that relative to 32-bit architectures the same data occupies more space in memory (due to swollen pointers and possibly other types and alignment padding). This increases the memory requirements of a given process and can have implications for efficient processor cache utilization. Maintaining a partial 32-bit model is one way to handle this and is in general reasonably effective. In fact, the highly performance-oriented z/OS operating system takes this approach currently, requiring program code to reside in any number of 32-bit address spaces while data objects can (optionally) reside in 64-bit regions.

Currently, most commercial x86 software is written in 32-bit code, not 64-bit code, so it does not take advantage of the larger 64-bit address space, the wider 64-bit registers and data paths, or the additional registers available in 64-bit mode on x86 processors. However, users of most RISC platforms and of free or open source operating systems have been able to use exclusively 64-bit computing environments for years. Not all such applications require a large address space or manipulate 64-bit data items, so they would not benefit from the larger address space or wider registers and data paths. The main advantage of 64-bit versions of such applications is the ability to access more registers in the x86-64 architecture.

Software availability

x86-based 64-bit systems sometimes lack equivalents to software that is written for 32-bit architectures. The most severe problem in Microsoft Windows is incompatible device drivers. Although most software can run in a 32-bit compatibility mode (also known as an emulation mode, e.g. Microsoft WoW64 technology for IA-64) or run in 32-bit mode natively (on AMD64), it is usually impossible to run a driver (or similar software) in that mode, since such a program usually runs between the OS and the hardware, where direct emulation cannot be employed. 64-bit versions of many existing device drivers are still unavailable, so using a 64-bit Microsoft Windows operating system can become frustrating as a result. However, most devices made after February 2007, as well as many made in late 2006, have 64-bit drivers available.

Because device drivers in operating systems with monolithic kernels, and in many operating systems with hybrid kernels, execute within the operating system kernel, it is possible to run the kernel as a 32-bit process while still supporting 64-bit user processes. This provides the memory and performance benefits of 64-bit for users without breaking binary compatibility with existing 32-bit device drivers, at the cost of some additional overhead within the kernel. This is the mechanism by which Mac OS X enables 64-bit processes while still supporting 32-bit device drivers.

64-bit data models

Converting application software written in a high-level language from a 32-bit architecture to a 64-bit architecture varies in difficulty. One common recurring problem is that some programmers assume that pointers have the same length as some other data type. These programmers assume they can transfer quantities between these data types without losing information. Those assumptions happen to be true on some 32-bit machines (and even some 16-bit machines), but they are no longer true on 64-bit machines. The C programming language and its descendant C++ make it particularly easy to make this sort of mistake. Differences between the C89 and C99 language standards also exacerbate the problem.[10]

To avoid this mistake in C and C++, the sizeof operator can be used to determine the size of these primitive types, if decisions based on their size need to be made, at both compile time and run time. Also, the <limits.h> header in the C99 standard and the numeric_limits class in the <limits> header in the C++ standard give more helpful information; sizeof only returns the size in chars. This can be misleading, because the standards leave the definition of the CHAR_BIT macro, and therefore the number of bits in a char, to the implementations. However, except for those compilers targeting DSPs, "64 bits == 8 chars of 8 bits each" has become the norm.

One needs to be careful to use the ptrdiff_t type (in the standard header <stddef.h>) for the result of subtracting two pointers; too much code incorrectly uses "int" or "long" instead. To represent a pointer (rather than a pointer difference) as an integer, use uintptr_t where available (it is only defined in C99, but some compilers otherwise conforming to an earlier version of the standard offer it as an extension).

Neither C nor C++ define the length of a pointer, int, or long to be a specific number of bits. C99, however, defines several dedicated integer types with an exact number of bits.

In most programming environments on 32-bit machines, pointers, "int" types, and "long" types are all 32 bits wide.

However, in many programming environments on 64-bit machines, "int" variables are still 32 bits wide, but "long"s and pointers are 64 bits wide. These are described as having an LP64 data model. Another alternative is the ILP64 data model, in which all three data types are 64 bits wide, and even SILP64, where "short" variables are also 64 bits wide[citation needed]. Yet another alternative is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long as 32 bits; "LL" refers to the "long long" type, which is at least 64 bits on all platforms, including 32-bit environments. In most cases the modifications required are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment without changes.

Many 64-bit compilers today use the LP64 model (including the native compilers for Solaris, AIX, HP-UX, Linux, Mac OS X, FreeBSD, and IBM z/OS). Microsoft's VC++ compiler uses the LLP64 model. The disadvantage of the LP64 model is that storing a long into an int may overflow. On the other hand, casting a pointer to a long will work. In the LLP64 model, the reverse is true. These are not problems which affect fully standard-compliant code, but code is often written with implicit assumptions about the widths of integer types.

Note that a programming model is a choice made on a per-compiler basis, and several can coexist on the same OS. However, the programming model chosen as the primary model by the OS's API typically dominates.

Another consideration is the data model used for drivers. Drivers make up the majority of the operating system code in most modern operating systems (although many may not be loaded when the operating system is running). Many drivers use pointers heavily to manipulate data, and in some cases have to load pointers of a certain size into the hardware they support for DMA. As an example, a driver for a 32-bit PCI device asking the device to DMA data into upper areas of a 64-bit machine's memory could not satisfy requests from the operating system to load data from the device to memory above the 4 gigabyte barrier, because the pointers for those addresses would not fit into the DMA registers of the device. This problem is solved by having the OS take the memory restrictions of the device into account when generating requests to drivers for DMA, or by using an IOMMU.

64-bit data models

Data model   short   int   long   long long   pointers
LP64           16     32     64       64          64
ILP64          16     64     64       64          64
SILP64         64     64     64       64          64
LLP64          16     32     32       64          64

Current 64-bit microprocessor architectures

64-bit microprocessor architectures (as of 2008) include:

  • AMD's AMD64 (x86-64), also implemented by Intel as EM64T (later Intel 64)
  • Intel's IA-64 (Itanium)
  • DEC's Alpha
  • Sun's 64-bit SPARC (UltraSPARC) and Fujitsu's SPARC64
  • IBM's 64-bit PowerPC/PowerPC AS and z/Architecture
  • 64-bit MIPS (the R4000 and its successors)
  • HP's PA-RISC 2.0

Most 64-bit processor architectures can execute code for the 32-bit version of the architecture natively without any performance penalty. This kind of support is commonly called bi-arch support or more generally multi-arch support.

Images

In digital imaging, 64-bit refers to 48-bit images with a 16-bit alpha channel.

References

  1. ^ AMD Athlon 64 X2 Dual-Core Processor Product Data Sheet, order number: 33425, revision 3.10, January 2007, Advanced Micro Devices, Inc.
  2. ^ Joe Heinrich: "MIPS R4000 Microprocessor User's Manual, Second Edition", 1994, MIPS Technologies, Inc.
  3. ^ Richard L. Sites: "Alpha AXP Architecture", Digital Technical Journal, Volume 4, Number 4, 1992, Digital Equipment Corporation.
  4. ^ Linley Gwennap: "UltraSparc Unleashes SPARC Performance", Microprocessor Report, Volume 8, Number 13, 3 October 1994, MicroDesign Resources.
  5. ^ J. W. Bishop, et al.: "PowerPC AS A10 64-bit RISC microprocessor", IBM Journal of Research and Development, Volume 40, Number 4, July 1996, IBM Corporation.
  6. ^ Linley Gwennap: "PA-8000 Combines Complexity and Speed", Microprocessor Report, Volume 8, Number 15, 14 November 1994, MicroDesign Resources.
  7. ^ F. P. O'Connell and S. W. White: "POWER3: The next generation of PowerPC processors", IBM Journal of Research and Development, Volume 44, Number 6, November 2000, IBM Corporation.
  8. ^ "VIA Unveils Details of Next-Generation Isaiah Processor Core". VIA Technologies, Inc. Retrieved 2007-07-18.
  9. ^ "Frequently Asked Questions About the Java HotSpot VM". Sun Microsystems, Inc. Retrieved 2007-05-03.
  10. ^ http://groups.google.com/group/comp.lang.c/msg/82fdb7c12af4e6ba


This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.