Microprocessor

The document discusses microprocessors and their components. It describes how microprocessors contain the central processing unit on a single integrated circuit. It then explains the different parts of a microprocessor, including the system clocks, data bus, address bus, and cache bus. It provides details on how these components work and comparisons of their specifications across different processor models.

microprocessor

Time: 2010-01-29 01:45 | From: Elec_Intro | Editor: Tiny



Dedicated "Backside" Cache Bus

Conventional processors use level 2 cache on the motherboard and connect to it using the standard memory bus arrangement. To achieve better performance, many newer processors use a dedicated high-speed bus to connect the processor to the level 2 cache. For example, the standard Pentium 200 runs on a 66 MHz system bus, and the system cache runs at this speed as well, but the Pentium Pro 200 has an integrated level 2 cache that runs at full processor speed: 200 MHz. A special backside bus manages this high-speed data link between the processor and the level 2 cache (which is entirely within the Pentium Pro package, since the package contains both the processor and the level 2 cache). The Pentium II has a similar compromise arrangement; its cache bus runs at half the processor speed, so a 266 MHz Pentium II runs its cache bus at 133 MHz (much slower than the Pentium Pro but much faster than the Pentium).

Both of these buses are transactional (non-blocking), so they allow concurrent requests to the system cache, greatly improving performance. Another advantage of this design is that separate caches, each with its own bus, are far superior for multiprocessing: not only does each processor have its own cache instead of sharing a single one on the motherboard, but each cache also has an independent, non-interfering bus to service it.
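As a quick illustration of the cache-clock arithmetic above, here is a toy calculation (the function name is mine; the divisor values come from the examples in the text):

```python
# Cache bus clock = processor clock / divisor. The Pentium Pro runs its
# integrated L2 cache at full speed (divisor 1); the Pentium II runs its
# backside cache bus at half the processor speed (divisor 2).
def cache_bus_mhz(cpu_mhz, divisor):
    return cpu_mhz / divisor

print(cache_bus_mhz(200, 1))  # Pentium Pro 200 -> 200.0 MHz cache
print(cache_bus_mhz(266, 2))  # Pentium II 266 -> 133.0 MHz cache
```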

Note: Intel calls the use of separate buses for the cache and memory the Dual Independent Bus (DIB) architecture.

Processor / Memory Data Bus

Every bus is composed of two distinct parts: the data bus and the address bus. The data bus is what most people mean when talking about a bus; these are the lines that actually carry the data being transferred. The wider the data part of the bus, the more information can be transmitted simultaneously, so wider data buses generally mean higher performance. The speed of the bus is dictated by the system clock speed and is the other main driver of bus performance.

The bandwidth of the data bus is how much information can flow through it, and is a function of the bus width (in bits) and its speed (in MHz). You can think of the data bus as a highway: its width is the number of lanes and its speed is how fast the cars are traveling. The bandwidth is then the amount of traffic the highway can carry in a given unit of time, which depends on how many lanes there are and how fast the cars can drive in them.

Memory bus bandwidth is extremely important in modern PCs, because it is often a main bottleneck to system performance. With processors today running so much faster than other parts of the system, increasing the speed at which data can be fed to the processor from the "outside" usually has more impact on overall performance than speeding up the processor itself. This is why, for example, a Pentium 150 is not much faster than a Pentium 133: the P150 runs on a 60 MHz memory bus and the P133 on a 66 MHz bus. Ten percent more clock speed on the system bus improves overall performance much more than a 10% faster processor.

Address Bus Size and Maximum RAM for Specific Processors

The table below shows the address bus width and maximum system memory for various processors. A 32-bit address bus with 4 GB maximum addressability is by far the most common, and since it hasn't been a limiting factor in the vast majority of cases, there hasn't been a great impetus to widen the address space. Look here to see full details of all the characteristics for any specific processor.
Processors                                                   Address Bus Width (bits)   Maximum System RAM
8088, 8086                                                   20                         1 MB
80286, 80386SX                                               24                         16 MB
80386DX, 80486DX, 80486SX, 80486DX2, 80486DX4, AMD 5x86,     32                         4 GB
  Cyrix 5x86, Pentium, Pentium OverDrive, Pentium with MMX,
  Pentium with MMX OverDrive, 6x86, K5, K6, 6x86MX
Pentium Pro, Pentium II                                      36                         64 GB
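The maximum-RAM column is just two raised to the address bus width, in bytes; a one-line sketch makes the relationship explicit:

```python
# Maximum addressable memory, in bytes, for a given address bus width.
def max_addressable_bytes(address_bits):
    return 2 ** address_bits

print(max_addressable_bytes(20) // 2**20)  # 1 (MB, the 8088/8086 limit)
print(max_addressable_bytes(32) // 2**30)  # 4 (GB, the common 32-bit limit)
```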

Data Bus Size and Bandwidth for Specific Processors

This table illustrates memory data bus size, speed and bandwidth for various processors. You can see that while in recent years raw processor speeds have increased a great deal, memory bus speeds have remained somewhat stagnant, and memory system bandwidth has been essentially unchanged since the introduction of the Pentium running on a 66 MHz system bus in 1994. Look here to see full details of all the characteristics for any specific processor.

There are three different tables, reflecting the three general speed ranges used by data buses in PCs over the last 15 years. In each table the processor family is listed along with the data bus width, followed by columns showing the bus bandwidth in MB/sec at each of the bus speeds normally used by that processor. Remember that processor clock multipliers mean that many 486 and later CPUs run at a multiple of the system bus speed.

Note: Many people incorrectly deduce the "size" of the processor from the width of the data bus. For example, people see the 64-bit data bus width on the Pentium and conclude that "the Pentium is a 64-bit processor". In fact, the internal register size is what determines the "size" of a processor (and by this definition every processor introduced in the last 10 years has been 32-bit).

First and second generation processors:


Processor Family   Width (bits)   4.77 MHz   6 MHz   8 MHz   10 MHz   12 MHz   16 MHz   20 MHz
8088               8              4.5        --      7.6     --       --       --       --
8086               16             9.1        --      15.3    19.1     --       --       --
80286              16             --         11.4    15.3    19.1     22.9     30.5     38.1

Third and fourth generation processors (plus Pentium OverDrive for 486):

Processor Family            Width (bits)   16 MHz   20 MHz   25 MHz   33 MHz   40 MHz   50 MHz
80386DX                     32             63.6     76.3     95.4     127.2    152.6    --
80386SX                     16             31.8     38.1     47.7     63.6     --       --
80486DX                     32             --       --       95.4     127.2    --       190.7
80486SX                     32             63.6     76.3     95.4     127.2    --       --
80486DX2                    32             --       --       95.4     127.2    152.6    --
80486DX4                    32             --       --       95.4     127.2    152.6    --
AMD 5x86                    32             --       --       --       127.2    --       --
Cyrix 5x86                  32             --       --       --       127.2    152.6    --
Pentium OverDrive (486)     32             --       --       95.4     127.2    --       --

Fifth and sixth generation processors (except Pentium OverDrive for 486):

Processor Family                   Width (bits)   50 MHz   55 MHz   60 MHz   66 MHz   75 MHz
Pentium                            64             381.5    --       457.8    508.6    --
Pentium OverDrive (for Pentiums)   64             381.5    --       457.8    508.6    --
Pentium with MMX                   64             381.5    --       457.8    508.6    --
Pentium with MMX OverDrive         64             381.5    --       457.8    508.6    --
6x86                               64             381.5    419.6    457.8    508.6    572.2
K5                                 64             381.5    --       457.8    508.6    --
Pentium Pro                        64             --       --       457.8    508.6    --
Pentium II                         64             --       --       --       508.6    --
K6                                 64             --       --       --       508.6    --
6x86MX                             64             --       --       457.8    508.6    572.2

Note: You may be somewhat confused by the bandwidth numbers listed in the tables above. For example, shouldn't the bandwidth of a Pentium running on a 66 MHz bus be 64/8*66.6 = 533.3 MB/sec? This is how most people (and even companies) write it, but it is not technically correct, because of the old problem of different definitions of what "M" stands for. The "M" in "MHz" is 1,000,000 (10^6), but the "M" in "MBytes/second" is 1,048,576 (2^20). So the bandwidth of the Pentium is more properly stated as 64/8*66.6*1,000,000/1,048,576 = 508.6 MBytes/second.

Processor / Memory Address Bus

The address bus is the set of lines that carry information about where in memory data is to be transferred to or from. No actual data is carried on this bus; rather, it carries memory addresses that select the location data is read from or written to. The speed of the address bus is the same as that of the data bus it is matched to.
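The MHz-versus-MBytes arithmetic in the note can be packaged as a small helper (a sketch; the function name is mine). Note that the bus really runs at 66 2/3 MHz, which is why the result comes out to 508.6:

```python
def bus_bandwidth_mb_per_sec(width_bits, speed_hz):
    # "M" in MB/sec means 2**20 bytes, while the bus speed is in decimal hertz.
    return width_bits / 8 * speed_hz / 2**20

# 64-bit Pentium data bus on a 66 2/3 MHz system bus:
print(round(bus_bandwidth_mb_per_sec(64, 200e6 / 3), 1))  # 508.6
```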

The width of the address bus controls the addressability of the processor: how much system memory the processor can read or write. Continuing the highway analogy, the address bus carries the exit numbers for the cars to use. The wider the address bus, the more digits the exit numbers can have, and the more exits the highway can support. The widths of the address and data buses aren't linked; you can have a highway with many lanes but few exits, or vice versa. Usually, though, newer processors have both wider data and wider address buses.

Address bus size is not thought about very often, because it has no direct impact on performance. Processors can usually address far more physical memory than most people will ever use, and in fact the system chipset or motherboard usually places much tighter restrictions on maximum system memory than the processor does. For example, a Pentium can theoretically address 4 GB of system memory, but most normal motherboards won't take even a quarter of that amount.

System Clocks

Every modern PC has multiple system clocks. Each vibrates at a specific frequency, normally measured in MHz (megahertz, or millions of cycles per second). A clock "tick", sometimes called a cycle, is the smallest unit of time in which processing happens; some kinds of work can be done in one cycle while others require many. The ticking of these clocks drives the various circuits in the PC, and the faster they tick, the more performance you get from your machine (other things being equal).

The original PCs had a unified system clock: a single clock (running at a very low speed like 8 MHz) drove the processor, the memory (there was no cache back then) and the I/O bus. As PCs have advanced and some parts have gained speed faster than others, the need for multiple clocks has arisen. A typical modern PC now has four or five different clocks, running at different (but related) speeds. When the "system clock" is referred to generically, it normally means the speed of the memory bus on the motherboard (and not usually that of the processor). The various clocks in a modern PC are created using a single clock generator circuit on the motherboard to generate the "main" system clock, with clock multiplier or divider circuits deriving the other signals. The table below shows the typical arrangement of clocks in a 266 MHz Pentium II PC, and how they relate to each other:
Device / Bus          Speed (MHz)   Generated As
Processor             266           System Clock * 4
Level 2 Cache         133           System Clock * 2 (or Processor / 2)
System (Memory) Bus   66            Main clock generator
PCI Bus               33            System Clock / 2
ISA Bus               8.3           PCI Bus / 4
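The clock tree above can be derived programmatically from the single generator clock (a sketch only; real clock generation happens in hardware, and the nominal 66 2/3 MHz is written here as 66.7, so the derived figures round to the table's values):

```python
# Derive the clocks of a hypothetical 266 MHz Pentium II system from the
# single "main" system clock produced by the motherboard clock generator.
system_clock = 66.7  # MHz (nominally 66 2/3)

clocks = {
    "Processor": system_clock * 4,          # CPU clock multiplier
    "Level 2 Cache": system_clock * 4 / 2,  # half the processor speed
    "System (Memory) Bus": system_clock,
    "PCI Bus": system_clock / 2,
    "ISA Bus": system_clock / 2 / 4,        # PCI clock divided by 4
}
for device, mhz in clocks.items():
    print(f"{device}: {mhz:.1f} MHz")
```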

The entire system is tied to the speed of the system clock. This is why increasing the system clock speed is usually more important than increasing the raw processor speed: the processor spends a great deal of time waiting on information from much slower devices, especially the system buses. A faster processor offers greater raw performance, but that increase matters far less if the processor spends much of its time sitting idle waiting for other, slower parts of the system.

Multiprocessing

Multiprocessing is running a system with more than one processor. In theory, using two processors instead of one should double performance; in reality it doesn't work that well, although multiprocessing can improve performance under certain conditions. To employ multiprocessing effectively, the computer system must have all of the following in place:

- Motherboard Support: A motherboard capable of handling multiple processors, meaning additional sockets or slots for the extra chips and a chipset capable of handling the multiprocessing arrangement.
- Processor Support: Processors that are capable of being used in a multiprocessing system. Not all are; in fact, some versions of the same processor are while others are not.
- Operating System Support: An operating system that supports multiprocessing, such as Windows NT or one of the various flavors of UNIX.

In addition, multiprocessing is most effective when used with application software designed specifically for it. Multiprocessing is managed by the operating system, which allocates different tasks to the various processors in the system. Applications designed for multiprocessing are said to be threaded, which means that they are broken into smaller routines that can be run independently. This allows the operating system to run these threads on more than one processor simultaneously, which is how multiprocessing improves performance. If an application isn't designed this way, it can't take advantage of multiple processors, although the operating system can still make use of the additional processor(s) when you run more than one application at a time (multitasking).

Multiprocessing can be either asymmetric or symmetric; the terms refer to how the operating system divides tasks between the processors. Asymmetric multiprocessing designates some processors to perform system tasks only, and others to run applications only. This rigid design loses performance whenever the computer needs to run many system tasks and no user tasks, or vice versa. Symmetric multiprocessing, often abbreviated SMP, allows either system or user tasks to run on any processor, which is more flexible and therefore gives better performance. SMP is what most multiprocessing PC motherboards use.

For a processor to support multiprocessing, it must support a multiprocessing protocol, which dictates the way the processors and chipset talk to each other to implement SMP. Intel processors such as the Pentium and Pentium Pro use an SMP protocol called APIC, and Intel chipsets that support multiprocessing (such as the 430HX, 440FX and 450GX/KX) are designed to work with these chips. APIC is a proprietary standard, and Intel holds patents that prevent AMD or Cyrix from implementing it, which means that even though AMD and Cyrix can make Intel-compatible processors, they cannot make them work in SMP configurations on standard Intel chipset motherboards. Intel has, thus far, had the SMP market basically to itself. AMD and Cyrix implement their own SMP standard, called OpenPIC, which would be great except that no motherboards implement it. It is hoped that at some point a major chipset manufacturer will provide OpenPIC support, finally allowing SMP with AMD or Cyrix chips. Until then, however, Intel chips are the only choice for those who want to multiprocess.

In addition, the Intel Pentium Pro and Pentium II are currently the best choices for multiprocessing because each chip has its own self-contained level 2 cache. In a system with more than one processor and level 2 cache on the motherboard, the processors must share the cache, and each new processor added to the system leaves less cache per processor, which degrades performance. Each Pentium Pro or Pentium II, however, comes with its own level 2 cache, avoiding this problem and greatly improving performance, particularly on four-processor systems. For quad multiprocessing (four CPUs) the Pentium Pro is still the only option, despite being several years old, because it is the only CPU for which a chipset supporting quad multiprocessing is available. Future chipsets will allow four or more Pentium IIs to be used in a system.
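The idea of a threaded application, broken into routines that can run independently, can be sketched in a few lines (illustrative only; whether the threads actually land on separate processors is up to the operating system's scheduler, and in CPython the global interpreter lock limits the speedup for compute-bound work):

```python
import threading

# Split one job (summing a list) into two independent routines that an SMP
# operating system could schedule on different processors.
def partial_sum(chunk, results, slot):
    results[slot] = sum(chunk)

data = list(range(1_000_000))
mid = len(data) // 2
results = [0, 0]

threads = [
    threading.Thread(target=partial_sum, args=(data[:mid], results, 0)),
    threading.Thread(target=partial_sum, args=(data[mid:], results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results) == sum(data))  # True: same answer, computed in two threads
```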

Internal Processor Architecture and Operation


The architecture of a processor describes its internal structures and how they work. These are logical structures, of course; all processors are made of semiconductor material, and it is how this material is arranged that determines how the processor works. This is similar to software: all software boils down to a long string of ones and zeros, but it is how you design and lay out those bits that determines whether the software is good or not.

Processors are in some ways "black boxes". They all perform the same basic function to the outside world: they process instructions. In fact, the instructions they support, at least in the PC world, haven't changed much in the last 10 years. But on the inside, the techniques they use to execute instructions have grown much more powerful and complicated. In addition to improving performance by "brute force" (increasing clock speeds), chip makers have found innovative ways to wring more performance from each clock cycle. For example, the Intel 486DX-25 has over twice the performance of the Intel 386DX-25, even though they run at the same clock speed; the improvement is entirely due to advancements in internal architecture.

Furthermore, the architecture affects how fast the processor can run. Since a faster processor means a shorter time for each clock cycle, it becomes more and more difficult to design circuitry that can work in these smaller amounts of time. Making processors run at faster clock speeds requires changes not just to their physical characteristics but to their internal logic design as well.

Processor Instruction Sets


The job of all processors is to execute instructions: the commands that make up the machine language the processor understands. Most software programs are written in higher-level languages, but they must be translated into the processor's machine language in order for the computer to run (execute) them; this is called compiling the program to machine language. See here for more on the basics of software and machine language. Taken collectively, all of the various instructions that the processor can execute are called its instruction set. The instruction set determines what sort of software can run on the processor; for two processors to be compatible, they must (among other things) be able to execute the same instructions. The number and type of instructions supported by the processor dictate the requirements for all software that uses it, and have a significant impact on performance as well.

Instruction Set Complexity: CISC vs. RISC

The primary objective of processor designers is to improve performance, defined as the amount of work the processor can do in a given period of time. Different instructions perform different amounts of work, so to increase performance you can either have the processor execute instructions in less time, or make each instruction it executes do more work. Executing instructions in less time means increasing the clock speed of the processor; making each instruction do more work means increasing the power and complexity of each instruction. Ideally you'd like to do both, of course, but it is a design tradeoff: it is hard to make more complex instructions run faster. A real-life analogy is pedaling a bicycle. To get where you are going more quickly, you can either use a low gear and pedal very quickly, or use a high gear and push harder. You can try to push both harder and faster, but you can never pedal as fast in a high gear as you can in a low one.

This tradeoff in basic instruction set design philosophy is reflected in the two main labels given to instruction sets. CISC stands for complex instruction set computer, the name given to processors that use a large number of complicated instructions in order to do more work with each one. RISC stands for reduced instruction set computer, the generic name for processors that use a small number of simple instructions, doing less work with each instruction but executing them much faster. Which of these two approaches to use in designing a processor has become one of the great arguments of the computer world, especially because once a platform makes an instruction set decision, it tends to stick with it in order to ensure compatibility with existing software. Somewhat ironically, the line between RISC and CISC has blurred in recent years, with each moving toward the middle ground in an attempt to improve performance. In addition, new ways of mixing RISC and CISC concepts have emerged through the use of translating processors.


Microprocessor
From Wikipedia, the free encyclopedia

Intel 4004, the first commercial microprocessor

A microprocessor incorporates the functions of a computer's central processing unit (CPU) on a single integrated circuit (IC),[1] or at most a few integrated circuits.[2] All modern CPUs are microprocessors, making the micro- prefix redundant. The microprocessor is a multipurpose, programmable device that accepts digital data as input, processes it according to instructions stored in its memory, and provides results as output. It is an example of sequential digital logic, as it has internal memory. Microprocessors operate on numbers and symbols represented in the binary numeral system. The advent of low-cost computers on integrated circuits has transformed modern society. General-purpose microprocessors in personal computers are used for computation, text editing, multimedia display, and communication over the Internet. Many more microprocessors are part of embedded systems, providing digital control over myriad objects from appliances to automobiles to cellular phones and industrial process control.

Intel introduced its first 4-bit microprocessor, the 4004, in 1971 and its 8-bit microprocessor, the 8008, in 1972. During the 1960s, computer processors were constructed out of small- and medium-scale ICs, each containing from tens of transistors to a few hundred. These were placed and soldered onto printed circuit boards, and often multiple boards were interconnected in a chassis. The large number of discrete logic gates used more electrical power, and therefore produced more heat, than a more integrated design with fewer ICs. The distance that signals had to travel between ICs on the boards limited a computer's operating speed.

In the NASA Apollo space missions to the moon in the 1960s and 1970s, all onboard computations for primary guidance, navigation and control were provided by a small custom processor called "The Apollo Guidance Computer". It used wire-wrap circuit boards whose only logic elements were three-input NOR gates.[3]

The integration of a whole CPU onto a single chip, or onto a few chips, greatly reduced the cost of processing power. The integrated circuit processor was produced in large numbers by highly automated processes, so unit cost was low. Single-chip processors also increase reliability because there are many fewer electrical connections to fail. As microprocessor designs get faster, the cost of manufacturing a chip (with smaller components built on a semiconductor chip the same size) generally stays the same. Microprocessors integrated, into one or a few large-scale ICs, the architectures that had previously been implemented using many medium- and small-scale integrated circuits. Continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
The first microprocessors emerged in the early 1970s and were used in electronic calculators, performing binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers and various kinds of automation, followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on. Since the early 1970s, the increase in capacity of microprocessors has followed Moore's law, which originally suggested that the number of components that can be fitted onto a chip doubles every year. With present technology it is actually every two years,[4] and Moore later changed the period accordingly.[5]

Contents

1 Embedded applications
2 Structure
3 Firsts
   3.1 CADC
   3.2 Gilbert Hyatt
   3.3 TMS 1000
   3.4 Intel 4004
   3.5 Pico/General Instrument
   3.6 Four-Phase Systems AL1
4 8-bit designs
5 12-bit designs
6 16-bit designs
7 32-bit designs
8 64-bit designs in personal computers
9 Multicore designs
10 RISC
11 Special-purpose designs
12 Market statistics
13 See also
14 Notes
15 References
16 External links

Embedded applications
Thousands of items that were traditionally not computer-related include microprocessors. These include large and small household appliances, cars (and their accessory equipment units), car keys, tools and test instruments, toys, light switches/dimmers and electrical circuit breakers, smoke alarms, battery packs, and hi-fi audio/visual components (from DVD players to phonograph turntables). Products such as cellular telephones, DVD video systems and HDTV broadcast systems fundamentally require consumer devices with powerful, low-cost microprocessors. Increasingly stringent pollution control standards effectively require automobile manufacturers to use microprocessor engine management systems to allow optimal control of emissions over the widely varying operating conditions of an automobile.

Non-programmable controls would require complex, bulky, or costly implementation to achieve the results possible with a microprocessor. A microprocessor control program (embedded software) can be easily tailored to different needs of a product line, allowing upgrades in performance with minimal redesign of the product, and different features can be implemented in different models of a product line at negligible production cost. Microprocessor control of a system can provide control strategies that would be impractical to implement using electromechanical controls or purpose-built electronic controls. For example, an engine control system in an automobile can adjust ignition timing based on engine speed, load on the engine, ambient temperature, and any observed tendency for knocking, allowing the automobile to operate on a range of fuel grades.

Structure

A block diagram of the internal architecture of the Z80 microprocessor, showing the arithmetic and logic section, register file, control logic section, and buffers to external address and data lines

The internal arrangement of a microprocessor varies depending on the age of the design and the intended purposes of the processor. The complexity of an integrated circuit is bounded by physical limitations: the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more complex and powerful chips feasible to manufacture.

A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs arithmetic operations such as addition and subtraction, and logical operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation (zero value, negative number, overflow, or others). The control logic section retrieves instruction operation codes from memory and initiates whatever sequence of ALU operations is required to carry out the instruction. A single operation code might affect many individual data paths, registers, and other elements of the processor.

As integrated circuit technology advanced, it became feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger: fitting more transistors on a chip allowed word sizes to increase from 4- and 8-bit words up to today's 64-bit words. Additional features were added to the processor architecture; more on-chip registers sped up programs, and complex instructions could be used to make programs more compact. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors and had to be carried out in software. Integration of the floating point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating point calculations.

Occasionally, physical limitations of integrated circuits made such practices as a bit-slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, say, 32-bit words using integrated circuits with a capacity for only 4 bits each. With the ability to put large numbers of transistors on one chip, it became feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory, and increases the processing speed of the system for many applications.
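The ALU-plus-status-register idea described above can be sketched in a few lines (illustrative Python, not any real chip's behavior): an 8-bit add that sets zero, carry and negative flags, just as the control logic would read them after each operation.

```python
# Minimal sketch of an 8-bit ALU updating a status register.
def alu_add(a, b):
    """Add two 8-bit values; return (result, flags)."""
    raw = a + b
    result = raw & 0xFF
    flags = {
        "zero": result == 0,
        "carry": raw > 0xFF,              # unsigned overflow out of 8 bits
        "negative": bool(result & 0x80),  # high bit set = negative in two's complement
    }
    return result, flags

print(alu_add(0x80, 0x80))  # (0, {'zero': True, 'carry': True, 'negative': False})
```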

Generally, processor speed has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory.

Firsts
Three projects delivered a microprocessor at about the same time: Garrett AiResearch's Central Air Data Computer (CADC), Texas Instruments' (TI) TMS 1000 (September 1971), and Intel's 4004 (November 1971).
CADC

For more details on this topic, see Central Air Data Computer.

In 1968, Garrett AiResearch (which employed designers Ray Holt and Steve Geller) was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor". The Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944 chipset it used, are fairly unknown.[6]

Ray Holt graduated from California Polytechnic University in 1968, and began his computer design career with the CADC. From its inception, it was shrouded in secrecy until 1998 when, at Holt's request, the US Navy allowed the documents into the public domain. Since then people[who?] have debated whether this was the first microprocessor. Holt has stated that no one has compared this microprocessor with those that came later.[7] According to Parab et al. (2007), "The scientific papers and literature published around 1971 reveal that the MP944 digital processor used for the F-14 Tomcat aircraft of the US Navy qualifies as the first microprocessor. Although interesting, it was not a single-chip processor, as was not the Intel 4004; they both were more like a set of parallel building blocks you could use to make a general-purpose form. It contains a CPU, RAM, ROM, and two other support chips like the Intel 4004. Interestingly, it was made from the same P-channel technology, operated at military specifications and had larger chips -- an excellent computer engineering design by any standards. Its design indicates a major advance over Intel, and two years earlier. It actually worked and was flying in the F-14 when the Intel 4004 was announced. It indicates that today's industry theme of converging DSP-microcontroller architectures was started in 1971."[8] This convergence of DSP and microcontroller architectures is known as a digital signal controller.[citation needed]

Gilbert Hyatt

Gilbert Hyatt was awarded a patent claiming an invention pre-dating both TI and Intel, describing a "microcontroller".[9] The patent was later invalidated, but not before substantial royalties were paid out.[10][11]
TMS 1000

The Smithsonian Institution says TI engineers Gary Boone and Michael Cochran succeeded in creating the first microcontroller (also called a microcomputer) and the first single-chip CPU in 1971. The result of their work was the TMS 1000, which went commercial in 1974.[12] TI stressed the 4-bit TMS 1000 for use in pre-programmed embedded applications, introducing a version called the TMS1802NC on September 17, 1971 that implemented a calculator on a chip.

TI filed for a patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for the single-chip microprocessor architecture on September 4, 1973. In 1971 and again in 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patent. A history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent.

A computer-on-a-chip combines the microprocessor core (CPU), memory, and I/O (input/output) lines onto one chip. The computer-on-a-chip patent, called the "microcomputer patent" at the time, U.S. Patent 4,074,351, was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or more microprocessors as its CPU(s), while the concept defined in the patent is more akin to a microcontroller.
Intel 4004

The 4004 with cover removed (left) and as actually used (right)
Main article: Intel 4004

The Intel 4004 is generally regarded as the first commercially available microprocessor,[13][14] and cost $60.[15] The first known advertisement for the 4004 is dated November 15, 1971 and appeared in Electronic News.[16]

The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom's original design called for a programmable chip set consisting of seven different chips. Three of the chips were to make a special-purpose CPU with its program stored in ROM and its data stored in shift-register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift-register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device, and a 4-bit central processing unit (CPU). Although not a chip designer, he felt the CPU could be integrated into a single chip, but as he lacked the technical know-how the idea remained just a wish for the time being.

While the architecture and specifications of the MCS-4 came from the interaction of Hoff with Stanley Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima, during 1969, Mazor and Hoff moved on to other projects. In April 1970, Intel hired Italian-born engineer Federico Faggin as project leader, a move that ultimately made the single-chip CPU final design a reality (Shima meanwhile designed the Busicom calculator firmware and assisted Faggin during the first six months of the implementation). Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild Semiconductor[17] and designed the world's first commercial integrated circuit using SGT, the Fairchild 3708, had the correct background to lead the project into what would become the first commercial general-purpose microprocessor. Since SGT was his very own invention, it, in addition to his new methodology for random logic design, made it possible to implement a single-chip CPU with the proper speed, power dissipation, and cost. The manager of Intel's MOS Design Department at the time of the MCS-4 development was Leslie L. Vadász, but Vadász's attention was completely focused on the mainstream business of semiconductor memories, and he left the leadership and the management of the MCS-4 project to Faggin, who was ultimately responsible for leading the 4004 project to its realization. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971.[citation needed]
Pico/General Instrument

The PICO1/GI250 chip, introduced in 1971. It was designed by Pico Electronics (Glenrothes, Scotland) and manufactured by General Instrument of Hicksville, NY.

In 1971, Pico Electronics[18] and General Instrument (GI) introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe/Litton Royal Digital III calculator. This chip could also arguably lay claim to be one of the first microprocessors or microcontrollers, having ROM, RAM and a RISC instruction set on-chip. The layout for the four layers of the PMOS process was hand drawn at x500 scale on mylar film, a significant task at the time given the complexity of the chip.

Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott.[19] The key team members had originally been tasked by Elliott Automation to create an 8-bit computer in MOS and had helped establish a MOS Research Laboratory in Glenrothes, Scotland in 1967.

Calculators were becoming the largest single market for semiconductors, and Pico and GI went on to have significant success in this burgeoning market. GI continued to innovate in microprocessors and microcontrollers with products including the CP1600, IOB1680 and PIC1650.[20] In 1987, the GI Microelectronics business was spun out into the Microchip PIC microcontroller business.
Four-Phase Systems AL1

The Four-Phase Systems AL1 was an 8-bit bit-slice chip containing eight registers and an ALU.[21] It was designed by Lee Boysel in 1969.[22][23][24] At the time, it formed part of a nine-chip, 24-bit CPU with three AL1s, but it was later called a microprocessor when, in response to 1990s litigation by Texas Instruments, a demonstration system was constructed where a single AL1 formed part of a courtroom demonstration computer system, together with RAM, ROM, and an input-output device.[25]

8-bit designs

The Intel 4004 was followed in 1972 by the Intel 8008, the world's first 8-bit microprocessor. The 8008 was not, however, an extension of the 4004 design, but instead the culmination of a separate design project at Intel, arising from a contract with Computer Terminals Corporation (CTC), of San Antonio, TX, for a chip for a terminal they were designing,[26] the Datapoint 2200; fundamental aspects of the design came not from Intel but from CTC. In 1968, CTC's Vic Poor and Harry Pyle developed the original design for the instruction set and operation of the processor. In 1969, CTC contracted two companies, Intel and Texas Instruments, to make a single-chip implementation, known as the CTC 1201.[27] In late 1970 or early 1971, TI dropped out, being unable to make a reliable part. In 1970, with Intel yet to deliver the part, CTC opted to use their own implementation in the Datapoint 2200, using traditional TTL logic instead (thus the first machine to run 8008 code was not in fact a microprocessor at all, and was delivered a year earlier). Intel's version of the 1201 microprocessor arrived in late 1971, but was too late and too slow, and required a number of additional support chips. CTC had no interest in using it. CTC had originally contracted Intel for the chip, and would have owed them $50,000 for their design work.[27] To avoid paying for a chip they did not want (and could not use), CTC released Intel from their contract and allowed them free use of the design.[27] Intel marketed it as the 8008 in April 1972, as the world's first 8-bit microprocessor. It was the basis for the famous "Mark-8" computer kit advertised in the magazine Radio-Electronics in 1974.

The 8008 was the precursor to the very successful Intel 8080 (1974), which offered much improved performance over the 8008 and required fewer support chips, the Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released in August 1974 and the similar MOS Technology 6502 in 1975 (both designed largely by the same people). The 6502 family rivaled the Z80 in popularity during the 1980s.

A low overall cost, small packaging, simple computer bus requirements, and sometimes the integration of extra circuitry (e.g. the Z80's built-in memory refresh circuitry) allowed the home computer "revolution" to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair ZX81, which sold for US$99. A variation of the 6502, the MOS Technology 6510, was used in the Commodore 64, and yet another variant, the 8502, powered the Commodore 128.

The Western Design Center, Inc (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in medical implantable-grade pacemakers and defibrillators, and in automotive, industrial, and consumer devices.
WDC pioneered the licensing of microprocessor designs, later followed by ARM and other microprocessor intellectual property (IP) providers in the 1990s.

Motorola introduced the MC6809 in 1978, an ambitious and well thought-through 8-bit design which was source compatible with the 6800 and was implemented using purely hard-wired logic. (Subsequent 16-bit microprocessors typically used microcode to some extent, as CISC design requirements were getting too complex for pure hard-wired logic.)

Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture.

A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA COSMAC) (introduced in 1976), which was used on board the Galileo probe to Jupiter (launched 1989, arrived 1995). RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because a variant was available fabricated using a special production process, silicon on sapphire (SOS), which provided much better protection against cosmic radiation and electrostatic discharge than that of any other processor of the era. Thus, the SOS version of the 1802 was said to be the first radiation-hardened microprocessor.

The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even to 0 Hz, a total stop condition. This let the Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers or sensors would awaken the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication. Current versions of the Western Design Center 65C02 and 65C816 have static cores, and thus retain data even when the clock is completely halted.

12-bit designs
The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognised the DEC PDP-8 minicomputer instruction set. As such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into some military designs until the early 1980s.

16-bit designs
The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. Other early multi-chip 16-bit microprocessors include one that Digital Equipment Corporation (DEC) used in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both introduced in 1975-1976.

In 1975, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900. Another early single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080: it had the full TI 990 16-bit instruction set, used a plastic 40-pin package, and moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.

The Western Design Center (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.

Intel "upsized" their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC-type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an 8-bit external data bus, was the microprocessor in the first IBM PC. Intel then released the 80186 and 80188, the 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family's backwards compatibility. The 80186 and 80188 were essentially versions of the 8086 and 8088, enhanced with some onboard peripherals and a few new instructions; they were not used in IBM-compatible PCs because the built-in peripherals and their locations in the memory map were incompatible with the IBM design. The 8086 and its successors had an innovative but limited method of memory segmentation, while the 80286 introduced a full-featured segmented memory management unit (MMU). The 80386 introduced a flat 32-bit memory model with paged memory management.

The Intel x86 processors up to and including the 80386 do not include floating-point units (FPUs). Intel introduced the 8087, 80287, and 80387 math coprocessors to add hardware floating-point and transcendental function capabilities to the 8086 through 80386 CPUs. The 8087 works with the 8086/8088 and 80186/80188,[28] the 80187 works with the 80186/80188, the 80287 works with the 80286 and 80386,[29] and the 80387 works with the 80386 (yielding better performance than the 80287). The combination of an x86 CPU and an x87 coprocessor forms a single multi-chip microprocessor; the two chips are programmed as a unit using a single integrated instruction set.[30] Though the 8087 coprocessor is interfaced to the CPU through I/O ports in the CPU's address space, this is transparent to the program, which does not need to know about or access these I/O ports directly; the program accesses the coprocessor and its registers through normal instruction opcodes.
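The 8086's "innovative but limited" segmentation mentioned above combines a 16-bit segment and a 16-bit offset into a 20-bit physical address. A minimal sketch of the real-mode translation rule follows; the mask models the 8086's 20 address lines (1 MB), and the function name is my own:

```python
def real_mode_address(segment, offset):
    """8086 real-mode translation: physical = segment * 16 + offset,
    truncated to the chip's 20 address lines (1 MB address space)."""
    return ((segment << 4) + offset) & 0xFFFFF

# Many different segment:offset pairs alias the same physical address
a = real_mode_address(0x1234, 0x0010)   # 0x12350
b = real_mode_address(0x1235, 0x0000)   # 0x12350 again
```

The aliasing shown in the last two lines is one reason this scheme is considered limited compared to the 80286's full segmented MMU.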
Starting with the 80486, the successor to the 80386, the FPU was integrated with the control unit, MMU, and integer ALU in a pipelined design on a single chip (in the 80486DX version), or omitted entirely (in the 80486SX version). An ostensible coprocessor for the 80486SX, the 80487 was actually a complete 80486DX that, when installed, disabled and replaced the coprocessor-less 80486SX it was meant to upgrade.

32-bit designs

Upper interconnect layers on an Intel 80486DX2 die

16-bit designs had only been on the market briefly when 32-bit implementations started to appear.

The most significant of the 32-bit designs is the Motorola MC68000, introduced in 1979. The 68k, as it was widely known, had 32-bit registers in its programming model but used 16-bit internal data paths, three 16-bit arithmetic logic units, and a 16-bit external data bus (to reduce pin count), and externally supported only 24-bit addresses (internally it worked with full 32-bit addresses). In PC-based IBM-compatible mainframes the MC68000 internal microcode was modified to emulate the 32-bit System/370 IBM mainframe.[31] Motorola generally described it as a 16-bit processor, though it clearly has a 32-bit capable architecture. The combination of high performance, large (16 megabytes, or 2^24 bytes) memory space, and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.

The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982.[32][33] After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop supermicrocomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized supermicrocomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX System V operating system.

The first commercial, single-chip, fully 32-bit microprocessor available on the market was the HP FOCUS.
Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel's own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the iAPX 432's results were partly due to a rushed and therefore suboptimal Ada compiler.[citation needed]

The ARM first appeared in 1985. This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores and integrate them into their own system-on-a-chip products; only a few such vendors are licensed to modify the ARM cores. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as SMP applications processors with virtual memory.

Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems, Cromemco) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68k family faded from the desktop in the early 1990s. Other large companies designed the 68020 and its follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs.[34] The ColdFire processor cores are derivatives of the venerable 68020.

During this time (early to mid-1980s), National Semiconductor introduced a very similar 16-bit-pinout, 32-bit internal microprocessor called the NS 16032 (later renamed 32016), the full 32-bit version of which was named the NS 32032. Later, National Semiconductor produced the NS 32132, which allowed two CPUs to reside on the same memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10, but the NS32332, which arrived at approximately the same time as the MC68020, did not have enough performance. The third-generation chip, the NS32532, was different. It had about double the performance of the MC68030, which was released around the same time. The appearance of RISC processors like the AM29000 and MC88000 (now both dead) influenced the architecture of the final core, the NS32764. Technically advanced, with a superscalar RISC core, a 64-bit bus, and internal overclocking, it could still execute Series 32000 instructions through real-time translation.
When National Semiconductor decided to leave the Unix market, the chip was redesigned into the Swordfish embedded processor with a set of on-chip peripherals. The chip turned out to be too expensive for the laser printer market and was killed. The design team went to Intel and there designed the Pentium processor, which is very similar to the NS32764 core internally. The big success of the Series 32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good price/performance and was adopted by large companies like Canon. By the mid-1980s, Sequent introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s.

The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others. Other designs included the interesting Zilog Z80000, which arrived too late to market to stand a chance and disappeared quickly.

In the late 1980s, "microprocessor wars" started killing off some of the microprocessors.[citation needed] With only one major design win, Sequent, the NS 32032 just faded out of existence, and Sequent switched to Intel microprocessors.[citation needed]

From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.

64-bit designs in personal computers


While 64-bit microprocessor designs have been in use in several markets since the early 1990s (including the Nintendo 64 gaming console in 1996), the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market.

With AMD's introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (also called AMD64), in September 2003, followed by Intel's near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty as well as new 64-bit software. With 64-bit native operating systems such as Windows XP x64, Windows Vista x64, Windows 7 x64, Linux, BSD, and Mac OS X, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from IA-32, as it also doubles the number of general-purpose registers.

The move to 64 bits by PowerPC processors had been intended since the processors' design in the early 1990s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both floating-point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.[citation needed]

Multicore designs
Main article: Multi-core (computing)

A different approach to improving a computer's performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore's Law is becoming increasingly challenging as chipmaking technologies approach their physical limits. In response, microprocessor manufacturers look for other ways to improve performance so they can maintain the momentum of constant upgrades.

A multi-core processor is simply a single chip that contains more than one microprocessor core. This effectively multiplies the processor's potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor core). Some components, such as the bus interface and cache, may be shared between cores. Because the cores are physically very close to each other, they can communicate with each other much faster than separate processors in a multiprocessor system, which improves overall system performance.

In 2005, the first personal computer dual-core processors were announced. As of 2012, dual-core and quad-core processors are widely used in home PCs and laptops, while quad-, six-, eight-, ten-, twelve-, and sixteen-core processors are common in the professional and enterprise markets with workstations and servers. Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.

High-end Intel Xeon processors on the LGA 771, LGA 1366, and LGA 2011 sockets and high-end AMD Opteron processors on the C32 and G34 sockets are DP (dual processor) capable, as is the older Intel Core 2 Extreme QX9775, also used in an older Mac Pro by Apple and the Intel Skulltrail motherboard. AMD's G34 motherboards can support up to four CPUs and Intel's LGA 1567 motherboards can support up to eight CPUs. Modern desktop sockets do not support systems with multiple CPUs, but very few applications outside of the professional market can make good use of more than four cores, and both Intel and AMD currently offer fast quad- and six-core desktop CPUs, so this is generally a moot point anyway. AMD also offers the first and currently the only eight-core desktop CPUs with the FX-8xxx line, but anything with more than four cores is generally not very useful in home desktops.
As of January 24, 2012, these FX processors are generally inferior to similarly priced and sometimes cheaper Intel quad-core Sandy Bridge models. The desktop market has been in a transition towards quad-core CPUs since Intel's Core 2 Quads were released, and they are now quite common, although dual-core CPUs are still more prevalent. This is largely because of people using older or mobile computers, both of which have a much lower chance of having more than two cores than newer desktops, and because most computer users are not heavy users. AMD offers CPUs with more cores for a given amount of money than similarly priced Intel CPUs, but the AMD cores are somewhat slower, so the two trade blows in different applications depending on how well-threaded the running programs are. For example, Intel's cheapest Sandy Bridge quad-core CPUs often cost almost twice as much as AMD's cheapest Athlon II, Phenom II, and FX quad-core CPUs, but Intel has dual-core CPUs in the same price ranges as AMD's cheaper quad-core CPUs. In an application that uses one or two threads, the Intel dual-cores outperform AMD's similarly priced quad-core CPUs, and if a program supports three or four threads the cheap AMD quad-core CPUs outperform the similarly priced Intel dual-core CPUs.

Historically, AMD and Intel have switched places as the company with the fastest CPU several times. Intel currently leads on the desktop side of the computer CPU market, with their Sandy Bridge and Ivy Bridge series. In servers, AMD's new Opterons seem to have superior performance for their price point. This means that AMD is currently more competitive in low- to mid-end servers and workstations that more effectively use fewer cores and threads.
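The caveat above, that extra cores only pay off when software is designed to use them, is commonly quantified with Amdahl's law (not named in the text; the figures below are illustrative assumptions):

```python
def amdahl_speedup(cores, parallel_fraction):
    """Ideal speedup from extra cores when only a fraction of a program's
    work can run in parallel (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even at 95% parallel code, four cores give well under a 4x speedup,
# and a purely serial program (parallel_fraction = 0) gains nothing.
quad = amdahl_speedup(4, 0.95)   # about 3.48x
```

This is why the section notes that more than four cores is "generally not very useful in home desktops": typical desktop workloads have a large serial fraction.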

RISC
Main article: Reduced instruction set computing

In the mid-1980s to early 1990s, a crop of new high-performance reduced instruction set computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles.

In 1986, HP released its first system with a PA-RISC CPU. The first commercial RISC microprocessor design was released either by MIPS Computer Systems, the 32-bit R2000 (the R1000 was not released), or by Acorn Computers, the 32-bit ARM2 in 1987.[citation needed] The R3000 made the design truly practical, and the R4000 introduced the world's first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, and DEC Alpha.

In the late 1990s, only two 64-bit RISC architectures were still produced in volume for non-embedded applications: SPARC and Power ISA. But as ARM has become increasingly powerful, in the early 2010s it became the third RISC architecture in the general computing segment.

Special-purpose designs
A microprocessor is a general-purpose system. Several specialized processing devices have followed from the technology. Microcontrollers integrate a microprocessor with peripheral devices for embedded systems. A digital signal processor (DSP) is specialized for signal processing. Graphics processing units (GPUs) may have no, limited, or general programming facilities. For example, GPUs through the 1990s were mostly non-programmable and have only recently gained limited facilities such as programmable vertex shaders.

Market statistics
In 2003, about US$44 billion worth of microprocessors were manufactured and sold.[35] Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those account for only about 2% of all CPUs sold.[36] About 55% of all CPUs sold in the world are 8-bit microcontrollers, over two billion of which were sold in 1997.[37]

As of 2002, less than 10% of all the CPUs sold in the world are 32-bit or more. Of all the 32-bit CPUs sold, about 2% are used in desktop or laptop personal computers. Most microprocessors are used in embedded control applications such as household appliances, automobiles, and computer peripherals. Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over $6.[36] About ten billion CPUs were manufactured in 2008. About 98% of new CPUs produced each year are embedded.[38]

See also

Arithmetic logic unit
Central processing unit
Comparison of CPU architectures
Computer architecture
Computer engineering
CPU design
Floating point unit
Instruction set
List of instruction sets
List of microprocessors
Microarchitecture
Microcode
Microprocessor chronology

Notes
1. Osborne, Adam (1980). An Introduction to Microcomputers. Volume 1: Basic Concepts (2nd ed.). Berkeley, California: Osborne-McGraw Hill. ISBN 0-931988-34-9.
2. Krishna Kant. Microprocessors and Microcontrollers: Architecture, Programming and System Design. PHI Learning Pvt. Ltd., 2007. ISBN 81-203-3191-5. Page 61, describing the iAPX 432.
3. Back to the Moon: The Verification of a Small Microprocessor's Logic Design. NASA Office of Logic Design.
4. Moore, Gordon (19 April 1965). "Cramming more components onto integrated circuits" (PDF). Electronics 38 (8). Retrieved 2009-12-23.
5. Excerpts from A Conversation with Gordon Moore: Moore's Law (PDF). Intel. 2005. Retrieved 2009-12-23.
6. Holt, Ray M. "World's First Microprocessor Chip Set". Ray M. Holt website. Archived from the original on 2010-07-25. Retrieved 2010-07-25.
7. Holt, Ray (27 September 2001). Lecture: Microprocessor Design and Development for the US Navy F-14 Fighter Jet (Speech). Room 8220, Wean Hall, Carnegie Mellon University, Pittsburgh, PA, US. Retrieved 2010-07-25.
8. Parab, Jivan S.; Shelake, Vinod G.; Kamat, Rajanish K.; Naik, Gourish M. (2007). Exploring C for Microcontrollers: A Hands on Approach (PDF). Springer. p. 4. ISBN 978-1-4020-6067-0. Retrieved 2010-07-25.
9. Hyatt, Gilbert P. "Single chip integrated circuit computer architecture". Patent 4942516, issued July 17, 1990.
10. "The Gilbert Hyatt Patent". intel4004.com. Federico Faggin. Retrieved 2009-12-23.
11. Crouch, Dennis (1 July 2007). "Written Description: CAFC Finds Prima Facie Rejection (Hyatt v. Dudas (Fed. Cir. 2007))". Patently-O blog. Retrieved 2009-12-23.
12. Augarten, Stan (1983). "The Most Widely Used Computer on a Chip: The TMS 1000". State of the Art: A Photographic History of the Integrated Circuit. New Haven and New York: Ticknor & Fields. ISBN 0-89919-195-9. Retrieved 2009-12-23.
13. Mack, Pamela E. (30 November 2005). "The Microcomputer Revolution". Retrieved 2009-12-23.
14. History in the Computing Curriculum (PDF). Retrieved 2009-12-23.
15. Bright, Peter (15 November 2011). "The 40th birthday of (maybe) the first microprocessor, the Intel 4004". Ars Technica. http://arstechnica.com/business/2011/11/the-40thbirthday-ofmaybethe-first-microprocessor/
16. Faggin, Federico; Hoff, Marcian E., Jr.; Mazor, Stanley; Shima, Masatoshi (December 1996). "The History of the 4004". IEEE Micro 16 (6): 10-20. doi:10.1109/40.546561.
17. Faggin, F.; Klein, T.; L. (23 October 1968). "Insulated Gate Field Effect Transistor Integrated Circuits with Silicon Gates" (JPEG image). International Electronic Devices Meeting. IEEE Electron Devices Group. Retrieved 2009-12-23.
18. McGonigal, James (20 September 2006). "Microprocessor History: Foundations in Glenrothes, Scotland". McGonigal personal website. Retrieved 2009-12-23.
19. Tout, Nigel. "ANITA at its Zenith". Bell Punch Company and the ANITA calculators. Retrieved 2010-07-25.
20. Kane, Gerry; Osborne, Adam. 16 Bit Microprocessor Handbook. ISBN 0-07-931043-5.
21. Basset, Ross (2003). "When is a Microprocessor not a Microprocessor? The Industrial Construction of Semiconductor Innovation". In Finn, Bernard. Exposing Electronics. Michigan State University Press. p. 121. ISBN 0-87013-658-5.
22. "1971 - Microprocessor Integrates CPU Function onto a Single Chip". The Silicon Engine. Computer History Museum. Retrieved 2010-07-25.
23. Shaller, Robert R. (15 April 2004). "Dissertation: Technological Innovation in the Semiconductor Industry: A Case Study of the International Technology Roadmap for Semiconductors" (PDF). George Mason University. Archived from the original on 2006-12-19. Retrieved 2010-07-25.
24. RW (3 March 1995). "Interview with Gordon E. Moore". LAIR History of Science and Technology Collections. Los Altos Hills, California: Stanford University.
25. Bassett 2003, pp. 115, 122.
26. Ceruzzi, Paul E. (May 2003). A History of Modern Computing (2nd ed.). MIT Press. pp. 220-221. ISBN 0-262-53203-4.
27. Wood, Lamont (August 2008). "Forgotten history: the true origins of the PC". Computerworld. Archived from the original on 2011-01-07. Retrieved 2011-01-07.
28. Intel 8087 datasheet, p. 1.
29. Intel387 DX Math Coprocessor datasheet, p. 1, March 1992 (Order Number: 240448-005).
30. "Essentially, the 80C187 can be treated as an additional resource or an extension to the CPU. The 80C186 CPU together with an 80C187 can be used as a single unified system." Intel 80C187 datasheet, p. 3, November 1992 (Order Number: 270640-004).
31. Priorartdatabase.com.
