64-Bit Computer a Complete Guide
In computer architecture, 64-bit integers, memory addresses, or other data units are
those that are 64 bits (8 octets) wide. Also, 64-bit CPU and ALU
architectures are those that are based on registers, address buses, or data buses of
that size. 64-bit is also a term given to a generation of computers in which 64-bit
processors were the norm.
64-bit CPUs have existed in supercomputers since the 1960s and in RISC-based
workstations and servers since the early 1990s. In 2003 they were introduced to the
(previously 32-bit) mainstream personal computer arena, in the form of the x86-64
and 64-bit PowerPC processor architectures.
Architectural implications
Processor registers are typically divided into several groups: integer, floating-
point, SIMD, control, and often special registers for address arithmetic which may
have various uses and names such as address, index or base registers. However, in
modern designs, these functions are often performed by more general purpose
integer registers. In most processors, only integer and/or address registers can be
used to address data in memory; the other types cannot. The size of these registers
therefore normally limits the amount of directly addressable memory, even if there
are registers, such as floating-point registers, that are wider.
Most high performance 32-bit and 64-bit processors (some notable exceptions are
most ARM and 32-bit MIPS CPUs) have integrated floating-point hardware, which
is often, but not always, based on 64-bit units of data. For example, although the
x86/x87 architecture has instructions capable of loading and storing 64-bit (and 32-
bit) floating-point values in memory, the internal data and register format is 80-bit
wide. In contrast, the 64-bit Alpha family uses a 64-bit floating-point data and
register format (as well as 64-bit integer registers).
History
Most CPUs are designed so that the contents of a single integer register can store
the address (location) of any datum in the computer's virtual memory. Therefore,
the total number of addresses in the virtual memory — the total amount of data the
computer can keep in its working area — is determined by the width of these
registers. Beginning in the 1960s with the IBM System/360, then (amongst many
others) the DEC VAX minicomputer in the 1970s, and then with the Intel 80386 in
the mid-1980s, a de facto consensus developed that 32 bits was a convenient
register size. A 32-bit address register meant that 2³² addresses, or 4 GB of RAM,
could be referenced. At the time these architectures were devised, 4 GB of memory
was so far beyond the typical quantities (16 MB) available in installations that this
was considered to be enough "headroom" for addressing. 4 GB addresses were
considered an appropriate size to work with for another important reason: 4 billion
integers are enough to assign unique references to most physically countable things
in applications like databases.
Some supercomputer processor architectures of the 1970s and 80s used registers up
to 64 bits wide. However, 32 bits remained the norm until the early 1990s, when
the continual reductions in the cost of memory led to installations with quantities
of RAM approaching 4 GB, and the use of virtual memory spaces exceeding the 4-
gigabyte ceiling became desirable for handling certain types of problems. In
response, MIPS and DEC developed 64-bit microprocessor architectures, initially
for high-end workstation and server machines. By the mid-1990s, HAL Computer
Systems, Sun Microsystems, IBM and Hewlett Packard had developed 64-bit
architectures for their workstation and server systems. A notable exception to this
trend was IBM's mainframe line, which remained 32-bit. During the 1990s,
several low-cost 64-bit microprocessors were used in consumer electronics and
embedded applications. Notably, the Nintendo 64 and PlayStation 2 both had 64-
bit microprocessors before their introduction in personal computers. High-end
printers and network equipment, as well as industrial computers also used 64-bit
microprocessors such as the Quantum Effect Devices R5000. 64-bit computing
started to drift down to the personal computer desktop from 2003 onwards, when
some models in Apple's Macintosh line switched to the PowerPC 970 processor
(termed "G5" by Apple) and AMD launched its 64-bit x86-64 extension to the
x86 architecture; processors based on this architecture soon became common in
high-end PCs.
The emergence of the 64-bit architecture effectively increases the memory ceiling
to 2⁶⁴ addresses, equivalent to approximately 17.2 billion gigabytes, 16.8 million
terabytes, or 16 exabytes of RAM. To put this in perspective, in the days when 4
MB of main memory was commonplace, the maximum memory ceiling of 2³²
addresses was about 1,000 times larger than typical memory configurations.
Today, when over 2 GB of main memory is common, the ceiling of 2⁶⁴ addresses is
about ten billion times larger, i.e., about ten million times more headroom than in
the 2³² case.
Limitations
Most 64-bit microprocessors on the market today have an artificial limit on the
amount of memory they can address, because physical constraints make it
impractical to support the full 16.8 million terabyte capacity. For example, the
AMD Athlon X2 has a 40-bit address bus and recognizes only 48 bits of the 64-bit
virtual address. The newer Barcelona X4 supports a 48-bit physical address and 48
bits of the 64-bit virtual address.
64-bit timeline
1974: International Computers Limited launches the ICL 2900 Series with
32-bit, 64-bit, and 128-bit twos-complement integers; 64-bit and 128-bit
floating point; 32-bit, 64-bit and 128-bit packed decimal and a 128-bit
accumulator register. The architecture has survived through a succession of
ICL and Fujitsu machines. The latest is the Fujitsu Supernova, which
emulates the original environment on 64-bit Intel processors.
1976: Cray Research delivers the first Cray-1 supercomputer, which is based
on a 64-bit word architecture and would form the basis for later Cray vector
supercomputers.
1983: Elxsi launches the Elxsi 6400 parallel minisupercomputer. The Elxsi
architecture has 64-bit data registers but a 32-bit address space.
1991: MIPS Technologies produces the first 64-bit microprocessor, the
R4000, which implements the MIPS III ISA, the third revision of their MIPS
architecture.[2] The CPU is used in SGI graphics workstations starting with
the IRIS Crimson. However, 64-bit support for the R4000 would not be
included in the IRIX operating system until IRIX 6.2, released in 1996.
Kendall Square Research delivers its first KSR1 supercomputer, based on a
proprietary 64-bit RISC processor architecture running OSF/1.
1993: DEC releases the 64-bit DEC OSF/1 AXP Unix-like operating system
(later renamed Tru64 UNIX).
1994: Intel announces plans for the 64-bit IA-64 architecture (jointly
developed with Hewlett-Packard) as a successor to its 32-bit IA-32
processors. A 1998 to 1999 launch date is targeted. SGI releases IRIX 6.0,
with 64-bit support for the R8000 chip set.
1999: Intel releases the instruction set for the IA-64 architecture. AMD
publicly discloses its set of 64-bit extensions to IA-32, called x86-64 (later
renamed AMD64).
2000: IBM ships its first 64-bit ESA/390-compatible mainframe, the zSeries
z900, and its new z/OS operating system. 64-bit Linux on zSeries follows
almost immediately.
2001: Intel finally ships its 64-bit processor line, now branded Itanium,
targeting high-end servers. It fails to meet expectations due to the repeated
delays in getting IA-64 to market. Linux is the first operating system to run
on the processor at its release.
2003: AMD introduces its Opteron and Athlon 64 processor lines, based on
its AMD64 architecture, the first 64-bit processor architecture based on
x86. Apple also ships the 64-bit "G5" PowerPC 970 CPU courtesy
of IBM, along with an update to its Mac OS X operating system which adds
partial support for 64-bit mode. Several Linux distributions release with
support for AMD64. Microsoft announces plans to create a version of its
Windows operating system to support the AMD64 architecture. FreeBSD
releases with support for AMD64. Intel maintains that its Itanium chips
will remain its only 64-bit processors.
2004: Intel, reacting to the market success of AMD, admits it has been
developing a clone of the AMD64 extensions named IA-32e (later renamed
EM64T). Intel also ships updated versions of its Xeon and Pentium 4
processor families supporting the new instructions.
2005: On January 31, Sun releases Solaris 10 with support for AMD64 and
EM64T processors. On April 30, Microsoft releases Windows XP
Professional x64 Edition for AMD64 and EM64T processors.
2006: Sony, IBM, and Toshiba begin manufacturing of the 64-bit Cell
processor for use in the PlayStation 3, servers, workstations, and other
appliances.
32 vs 64 bit
A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most
operating systems must be extensively modified to take advantage of the new
architecture. Other software must also be ported to use the new capabilities; older
software is usually supported through either a hardware compatibility mode (in
which the new processors support the older 32-bit version of the instruction set as
well as the 64-bit version), through software emulation, or by the actual
implementation of a 32-bit processor core within the 64-bit processor (as with the
Itanium processors from Intel, which include an x86 processor core to run 32-bit
x86 applications). The operating systems for those 64-bit architectures generally
support both 32-bit and 64-bit applications.
One significant exception to this is the AS/400, whose software runs on a virtual
ISA, called TIMI (Technology Independent Machine Interface) which is translated
to native machine code by low-level software before being executed. The low-level
software is all that has to be rewritten to move the entire OS and all software to a
new platform, such as when IBM transitioned their line from the older 32/48-bit
"IMPI" instruction set to 64-bit PowerPC (IMPI wasn't anything like 32-bit
PowerPC, so this was an even bigger transition than from a 32-bit version of an
instruction set to a 64-bit version of the same instruction set).
While 64-bit architectures indisputably make working with large data sets in
applications such as digital video, scientific computing, and large databases easier,
there has been considerable debate as to whether they or their 32-bit compatibility
modes will be faster than comparably priced 32-bit systems for other tasks. On the
x86-64 architecture (AMD64), most 32-bit operating systems and applications
run smoothly on the 64-bit hardware.
Sun's 64-bit Java virtual machines are slower to start up than their 32-bit virtual
machines because Sun has only implemented the "server" JIT compiler (C2) for
64-bit platforms.[10] The "client" JIT compiler (C1), which produces less efficient
code but compiles much faster, is unavailable on 64-bit platforms.
Speed is not the only factor to consider in a comparison of 32-bit and 64-bit
processors. Workloads such as multitasking, stress testing, and clustering for
HPC (high-performance computing) may be better suited to a 64-bit architecture
given the correct deployment. For this reason, 64-bit clusters have been widely
deployed in large organizations such as IBM, HP, and Microsoft.
Software availability
x86-based 64-bit systems sometimes lack equivalents to software that is written for
32-bit architectures. The most severe problem in Microsoft Windows is
incompatible device drivers. Although most software can run in a 32-bit
compatibility mode (also known as an emulation mode, e.g. Microsoft WoW64
Technology for IA64) or run in 32-bit mode natively (on AMD64), it is usually
impossible to run a driver (or similar software) in that mode since such a program
usually runs in between the OS and the hardware, where direct emulation cannot
be employed. Because 64-bit drivers for most devices were not available until early
2007, using a 64-bit Microsoft Windows operating system was considered
impractical. However, the trend has shifted toward 64-bit computing, as most
manufacturers now provide both 32-bit and 64-bit drivers.
Because device drivers in operating systems with monolithic kernels, and in many
operating systems with hybrid kernels, execute within the operating system kernel,
it is possible to run the kernel as a 32-bit process while still supporting 64-bit user
processes. This provides the memory and performance benefits of 64-bit for users
without breaking binary compatibility with existing 32-bit device drivers, at the
cost of some additional overhead within the kernel. This is the mechanism by
which Mac OS X enables 64-bit processes while still supporting 32-bit device
drivers.
To avoid incorrect assumptions about type sizes in C and C++, the sizeof operator
can be used to determine the size of the primitive types when decisions based on
their size need to be made, at both compile time and run time. Also, the <limits.h>
header in the C99 standard, and the numeric_limits class in the <limits> header in
the C++ standard, give more helpful information; sizeof only returns the size in
chars. This can be misleading, because the standards leave the definition of the
CHAR_BIT macro, and therefore the number of bits in a char, to the
implementation. However, except for compilers targeting DSPs, "64 bits == 8
chars of 8 bits each" has become the norm.
One needs to be careful to use the ptrdiff_t type (in the standard header <stddef.h>)
for the result of subtracting two pointers; too much code incorrectly uses "int" or
"long" instead. To represent a pointer (rather than a pointer difference) as an
integer, use uintptr_t where available (it is only defined in C99, but some
compilers otherwise conforming to an earlier version of the standard offer it as an
extension).
Neither C nor C++ define the length of a pointer, int, or long to be a specific
number of bits. In C99, however, the <stdint.h> header provides names for integer
types with certain numbers of bits, where those types are available.
Another consideration is the data model used for drivers. Drivers make up the
majority of the operating system code in most modern operating systems (although
many may not be loaded when the operating system is running). Many drivers use
pointers heavily to manipulate data, and in some cases have to load pointers of a
certain size into the hardware they support for DMA. As an example, a driver for a
32-bit PCI device asking the device to DMA data into upper areas of a 64-bit
machine's memory could not satisfy requests from the operating system to load
data from the device to memory above the 4 gigabyte barrier, because the pointers
for those addresses would not fit into the DMA registers of the device. This
problem is solved by having the OS take the memory restrictions of the device into
account when generating requests to drivers for DMA, or by using an IOMMU.
Most 64-bit processor architectures can execute code for the 32-bit version of the
architecture natively without any performance penalty. This kind of support is
commonly called bi-arch support or more generally multi-arch support.