How Chips Are Designed


Introduction to ASIC Design

How are chips designed?


Muhammad Ali Raza Anjum

How Chips Are Designed


Old-Style Design Process
New-Style Design Process
Verifying the Design Works
Using Outside IP
Getting to Tape Out and Film
Current Problems and Future Trends

How Chips Are Designed


The first semiconductor chips were assembled, literally, by hand. New chips today are designed in a completely different manner from those of just 10 years ago, and it's almost certain that the job description will change again in another 10 years. The computers that chip-design engineers use are not fundamentally different from a normal PC, and even the specialized software would not look too alien to a casual PC user. That software is breathtakingly expensive, however, and supports a multibillion-dollar industry all by itself. It is called electronic design automation (EDA). Chip-design engineers rely on their computers and their EDA software "tools" the way a carpenter relies on a collection of specialized tools.

How Chips Are Designed


There are a handful of large EDA vendors, notably:
Synopsys (Mountain View, California), Mentor Graphics (Portland, Oregon), and Cadence Design Systems (San Jose, California).

There are also numerous smaller "boutique" EDA vendors that supply special-purpose software tools to chip designers in niche markets. Regardless of the tools a chip designer uses, the goal is always the same: to create a working blueprint for a new chip and get it ready for manufacturing. That blueprint ultimately becomes film, and the film (which is really not film any more, as we shall see) is used by chip makers in their factories to actually manufacture the chip.

Old-Style Design Process


Before personal computers, EDA, and automated tools, engineers originally used rubylith, a red plastic film sold by art-supply houses that is still used today by sign makers and graphic artists. Early chip designers would cut strips of rubylith with X-Acto knives and tape them to large transparent sheets hanging on their walls. Each layer of silicon or aluminum in the final chip required its own separate sheet, covered with a crisscrossing pattern of taped-on stripes. By laying two or more of these transparent sheets atop one another and lining them up carefully, you could check to make sure that the rubylith from one touched the rubylith from another at exactly the right points.

Old-Style Design Process


Or, you'd make sure the tape strips didn't touch where they weren't supposed to, which would have created an unwanted electrical short in the actual chip. This was painstaking work, to be sure, but such are the tribulations of the pioneers. The whole process was a bit like designing a tall building by drawing each floor on a separate sheet and stacking the sheets to be sure the walls, wiring, stairs, and plumbing all match up precisely. This task was called taping out, for reasons that are fairly obvious. Once you'd taped out, you were nearly done. About the only thing left to do was to wait for the chip to be made.

Old-Style Design Process


One last step remained, however, before you could get excited about waiting for silicon: you had to make film from your oversized rubylith layers. This was a simple photographic reduction process. Each rubylith-covered layer was used as a mask, projecting criss-crossed shadows onto a small film negative. It works just like a slide projector showing vacation snapshots on a big screen, but instead of making the images bigger, the reduction process makes them smaller. Each separate rubylith layer is projected onto a different film negative that is exactly the size of the chip itself, less than one inch on a side. Now you have a film set that you can send for fabrication.

New-Style Design Process


Today the process is radically different. Modern chip-design tools have done away with the error-prone manual work of taping up individual layers, but they only shift the workload: modern multimillion-transistor chips supply more than enough new challenges to make up the difference. Modern chips are far too complex to design manually. No single engineer can personally understand everything that goes on inside a new chip. Even teams of engineers have no single member who truly understands all the details and nuances of the design. One person might manage the project and command the overall architecture of the chip, but individual engineers will be responsible for portions of the detailed design.

New-Style Design Process


They must work together as a team, and they must rely on and trust each other as well as their tools. Those tools are all computer programs. No single EDA tool can take a chip design from start to finish; the assortment is important, and using a mixture of EDA tools is part of an engineer's craft. Schematic diagrams are the time-honored way of representing electrical circuits. Schematics, or wiring diagrams, are like subway maps: they draw, more or less realistically, the actual arrangement of wires and components (lines and stations). Schematics are accurate maps of circuit connections.

New-Style Design Process


Schematics are drawn on a computer screen using schematic-capture software. The schematic-capture software takes care of the simpler annotation chores, like labeling each function and making sure no wires are left dangling in midair. Schematic-capture software is a bit like word-processing software in that it won't provide inspiration or talent or create designs from nothing, but it will catch common mistakes and keep your workspace free from embarrassing erasure marks. A number of companies supply schematic-capture software, but most vendors are small firms and their ranks are dwindling. Modern chips are too complex to be designed this way, not because the schematic-capture programs can't handle it, but because the engineers can't.

New-Style Design Process


Designing a multimillion-transistor chip using schematic-capture software would be like painting a bridge with a tiny artist's brush and palette: there's just far too much area to cover and not enough time. To alleviate some of the tedium of designing a chip bit by bit, engineers have turned to a new method called hardware synthesis. Chips are not magically synthesized from thin air. Instead, engineers feed their computers instructions about the chip's organization and the computer generates the detailed circuit designs.

New-Style Design Process


The engineer still has to design the chip, just not at such a detailed level. It's like the difference between describing a brick wall brick by brick and inch by inch, or telling an assistant, "Build me a brick wall that's three feet high by 10 feet long." If you have an assistant you trust who is skilled in bricklaying, you should get the same result either way. Theoretically, that's true of hardware synthesis as well, but the reality is somewhat different. Although today's chip designers have overwhelmingly adopted hardware synthesis for their work, they still grumble about the trade-offs.

New-Style Design Process


For example, synthesized designs tend to run about 20 to 30 percent slower than "handcrafted" chip designs: instead of running at 500 MHz, a synthesized chip might run at only 400 MHz. Another drawback of hardware synthesis is that the resulting chips are often about 30 to 50 percent larger in terms of silicon real estate. More silicon means more cost, and bigger chips mean lower manufacturing yields. Once again, there are certain chip makers that don't use hardware synthesis because they're shaving every penny of cost possible.

New-Style Design Process


Third, chips made from synthesized designs tend to use more electricity than do manually designed chips. That's a side effect of the larger silicon size previously mentioned: more silicon means more power drained, and the difference can be 20 percent or more. For extremely low-power chips, such as the ones used in cellular telephones, handcrafted chips are still popular. Despite all these serious drawbacks, most new chips are created from synthesized designs simply because there's no other way. It takes a long time to finish a cathedral if you're laying every brick by hand.

New-Style Design Process


Companies will often use both design styles, synthesis and handcrafting. The first generation of a new chip will usually be designed with hardware-synthesis languages (described later) to get the chip out the door and onto the market as quickly as possible. After the chip is released (assuming that it sells well), the company might order its engineers to revise or redesign the chip, this time using more labor-intensive methods to shrink the silicon size, reduce power consumption, and cut manufacturing costs. This "second spin" of the chip will often appear six to nine months after the first version. When microprocessor makers announce faster, upgraded versions of an existing chip, this is often how the new chip was created.

New-Style Design Process


Schematics are fine, up to a point, but they're too detail-oriented for the large-scale designs that modern chips have become. Instead, engineering teams need something that's more high-level and more abstract, something that allows them to think big. Enter hardware-description languages (HDLs), the next step up the evolutionary ladder from schematic-capture programs. HDLs also enable hardware synthesis. Using an HDL, engineering teams can design the behavior of a circuit without exactly designing the circuit itself in detail. The HDL tool will translate the engineers' wishes into a circuit design.
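
To make the behavior-versus-detail distinction concrete, here is a minimal sketch in plain C, used only for illustration (the real HDLs, Verilog and VHDL, are introduced below, and C-based hardware languages later); the function names and the 4-bit adder are invented for this example. The behavioral version states only what the circuit does, while the structural version spells out every gate, which is the level of detail that synthesis is meant to take off the engineer's hands.

    /* Illustration only: the same 4-bit adder described two ways. */
    #include <stdint.h>
    #include <stdio.h>

    /* Behavioral view: say WHAT the circuit does; synthesis works out the gates. */
    uint8_t add4_behavioral(uint8_t a, uint8_t b)
    {
        return (uint8_t)((a + b) & 0x0F);   /* 4-bit sum, carry-out discarded */
    }

    /* Structural view: spell out the ripple-carry adder gate by gate. */
    uint8_t add4_structural(uint8_t a, uint8_t b)
    {
        uint8_t sum = 0, carry = 0;
        for (int i = 0; i < 4; i++) {
            uint8_t ai = (a >> i) & 1, bi = (b >> i) & 1;
            uint8_t s  = (uint8_t)(ai ^ bi ^ carry);                 /* two XOR gates    */
            carry      = (uint8_t)((ai & bi) | (carry & (ai ^ bi))); /* AND and OR gates */
            sum       |= (uint8_t)(s << i);
        }
        return sum;
    }

    int main(void)
    {
        /* Both views give the same answer: 9 + 5 = 14. */
        printf("%d %d\n", add4_behavioral(9, 5), add4_structural(9, 5));
        return 0;
    }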

New-Style Design Process


Instead of drawing figures on a screen (schematic capture), using an HDL is more like writing. If schematics are blueprints, HDLs are recipes. For a large chip, this procedural description can be hundreds of thousands of lines long, as long as a novel. The two most common HDLs are called VHDL and Verilog. Both of these languages were developed in the United States. Interestingly, there is a definite geographic division between VHDL users and Verilog users: Verilog aficionados seem to be clustered around the western United States and Canada, whereas VHDL holds sway in Europe and New England.

New-Style Design Process


Verilog is a few years older than VHDL. First developed in 1983, Verilog was for some time a proprietary HDL belonging to Cadence Design Systems. VHDL, on the other hand, was created as an open language and became an Institute of Electrical and Electronics Engineers (IEEE) standard in 1987. Sensing that VHDL's standard status would jeopardize its investment in Verilog, Cadence put its HDL in the public domain in 1990 and applied for IEEE approval, which it gained in 1995. Because of their age and the rapid increases in chip complexity, both languages have begun to crack a little under the strain of modern chip design. Some engineers argue that designing a 10-million-transistor chip using VHDL or Verilog is little better than the rubylith methods of the 1970s.

New-Style Design Process


HDLs come and go, but a few that seem to have reached critical mass in the EDA market are Superlog, Handel-C, and SystemC. Superlog, as the name implies, is a pumped-up version of Verilog: it's a superset of the language that adds some higher-level, more abstract features. Superlog allows designers experienced in Verilog to handle larger chip designs without going crazy with details. Handel-C and SystemC are both examples of a more radical approach to HDLs: they use the C programming language to define hardware as well as software.

New-Style Design Process


They take the heretical stance that, because popular HDLs like Verilog and VHDL are already programming languages, why not use an actual programming language in their place? This has the advantage that millions of programmers already know C programming, and thousands more students learn it every year. However, programming languages such as C, BASIC, Java, and the rest were never designed to create, or even adequately describe, hardware. Even the strongest backers of the C-as-hardware-language movement realize that the original C programming language isn't well suited to the job.

New-Style Design Process


For example, electronic circuits can perform multiple functions simultaneously, which programming languages like C can't describe. The C-based hardware languages have all added various extensions to help express this parallelism. Many have also added "libraries" of common hardware functions so that engineers don't have to create them from scratch.
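
As a rough sketch of what's missing, consider two registers that swap values on every clock edge. In real hardware both updates happen at the same instant; in plain, sequential C the simulator has to buffer every register's "next" value by hand before updating any of them, which is exactly the kind of bookkeeping (and parallel semantics) that the extensions and libraries provide. The example below is hypothetical plain C, not Handel-C or SystemC.

    /* Illustration only: plain C runs statements one after another, so
     * "simultaneous" hardware behavior must be faked with next-value variables. */
    #include <stdio.h>

    int main(void)
    {
        int reg_a = 1, reg_b = 2;

        for (int cycle = 0; cycle < 3; cycle++) {
            /* Compute every register's next value before updating any of them,
             * mimicking the simultaneous clocked update of real flip-flops. */
            int next_a = reg_b;
            int next_b = reg_a;
            reg_a = next_a;
            reg_b = next_b;
            printf("cycle %d: a=%d b=%d\n", cycle, reg_a, reg_b);
        }
        return 0;
    }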

Verifying the Design Works


Assuming the chip eventually makes it through place and route in one piece, it would normally be ready to send to the foundry for manufacturing. However, because the cost of tooling up a foundry to make a new chip is so high (in the neighborhood of $500,000), it's vitally important that everyone involved in its design convince themselves that there are no remaining bugs. There are few things more expensive, or more damaging to one's career, than a brand new chip that doesn't work. The enormous costs of chip manufacturing have spawned a sub-industry of companies providing tools and tests to verify complex chips without actually building them.

Verifying the Design Works


The business opportunity for these companies lies in charging only slightly less than the cost of a bad chip. All these verification tools work by simulating the chip before it's built. As with many things, the quality of the simulation depends on how much you're willing to spend. There are roughly three levels of simulation, and some engineering teams make use of all three. Others make do with just the simplest verification, not because they want to but because they can't afford anything more complete.

Verifying the Design Works


The first and easiest type of simulation is called C modeling. As you might guess, this consists of writing a computer program in C that attempts to duplicate the features and functions of the chip's hardware design. The holes in this strategy are fairly obvious: if the program isn't really an accurate reflection of the chip design, then it won't accurately reflect any problems, either. This strategy also requires essentially two parallel projects: the actual chip design, and the separate task of writing a program that's as close to the chip as the programmer can make it.

The major upside of this approach, and the reason so many engineering teams use it, is its speed.

Verifying the Design Works


It's quick to compile a C program (a few minutes at most) and it's quick to run one. Within an hour or so, the engineers could have a good idea of whatever shortcomings their chip might have. Many chip-design teams create a new C model every day and run it overnight; the morning's results determine the hardware team's task for the rest of the day. There aren't really any commercial C verification tools; they're simply standard computer programs that happen to model the behavior of a chip under development. They're written, compiled, and run on entirely normal PCs or workstations.
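
A minimal sketch of what such a C model looks like, assuming a hypothetical chip block (a simple 8-bit accumulator) invented for this example: one plain C function stands in for the block, and an ordinary self-checking test loop exercises it, the kind of program that can be compiled and run overnight on any PC.

    /* Illustration only: a C model of a hypothetical 8-bit accumulator block,
     * plus a self-checking test loop. Not taken from any real design. */
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t acc;                     /* models the block's internal register */

    static void accumulator_reset(void)            { acc = 0; }
    static uint8_t accumulator_step(uint8_t input) { acc = (uint8_t)(acc + input); return acc; }

    int main(void)
    {
        accumulator_reset();
        uint8_t expected = 0;
        for (int i = 0; i < 1000; i++) {
            uint8_t in = (uint8_t)(i * 7);          /* simple stimulus pattern    */
            expected = (uint8_t)(expected + in);    /* reference ("golden") value */
            if (accumulator_step(in) != expected) {
                printf("mismatch at step %d\n", i);
                return 1;
            }
        }
        printf("C model passed 1000 steps\n");
        return 0;
    }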

Verifying the Design Works


A significant step up from software simulation of the chip is hardware simulation. Despite the name, hardware simulation doesn't usually require any special hardware. Instead, a computer picks through the original hardware description of the chip (usually written in VHDL or Verilog) and attempts to simulate its behavior. This process is painfully slow but quite accurate. Because it uses the very HDL description that will be used to create the chip, there's no quibbling over the fidelity of the model. Unfortunately, hardware simulation is so slow that large chips are often simulated in chunks, instead of all at once, and this introduces errors.
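
Why is it so slow? A hardware simulator has to visit essentially every gate in the design on every simulated clock cycle. The toy loop below is only a sketch over an invented three-gate netlist (real simulators are event-driven and far more sophisticated), but it shows the shape of the work: run time grows with gate count multiplied by cycle count, and modern chips have millions of gates and need millions of cycles.

    /* Illustration only: a cycle-based evaluation loop over a tiny invented
     * netlist, showing why simulation time scales with gates x cycles. */
    #include <stdio.h>

    enum gate_type { GATE_AND, GATE_OR, GATE_XOR };
    struct gate { enum gate_type type; int in0, in1, out; };

    /* Three gates; nets 0 and 1 are inputs, net 4 is the output. */
    static struct gate netlist[] = {
        { GATE_AND, 0, 1, 2 },
        { GATE_XOR, 0, 1, 3 },
        { GATE_OR,  2, 3, 4 },
    };
    static int nets[5];

    static void evaluate_cycle(void)
    {
        for (size_t g = 0; g < sizeof netlist / sizeof netlist[0]; g++) {
            int a = nets[netlist[g].in0], b = nets[netlist[g].in1];
            switch (netlist[g].type) {
            case GATE_AND: nets[netlist[g].out] = a & b; break;
            case GATE_OR:  nets[netlist[g].out] = a | b; break;
            case GATE_XOR: nets[netlist[g].out] = a ^ b; break;
            }
        }
    }

    int main(void)
    {
        nets[0] = 1; nets[1] = 1;
        for (long cycle = 0; cycle < 1000000; cycle++)  /* millions of cycles... */
            evaluate_cycle();                           /* ...times every gate   */
        printf("output net = %d\n", nets[4]);
        return 0;
    }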

Verifying the Design Works


If there are problems when Part A of the chip communicates with Part B, and these two chunks are simulated separately, the hardware simulator won't find them. The alternatives are to let the simulation run for many days or to buy a faster computer. Regrettably, the larger the chip, the greater the need for accurate simulation, but the longer that simulation will take.

Verifying the Design Works


The ne plus ultra of chip verification is an emulator box. This is a relatively large box packed full of reprogrammable logic chips, row after row. To emulate a new chip under development, you first download the complete netlist of the chip into the emulator box. The box then acts like a (much) larger and (much) slower version of the chip. The advantages of this system are many. First, the emulator is real hardware, not a simulation, so it behaves more or less like the new chip really will. The major exception is speed: emulator boxes run at less than 1 percent of the speed of a real chip, but that's still much faster than a hardware simulator.

Verifying the Design Works


Third, emulator boxes generally emulate an entire chip, not just pieces of it. Naturally, emulating larger chips requires a larger box, and commensurately more expense, but at least it's possible. Because the emulator box is real hardware, you can connect it to other devices in the "real" world outside the box. For example, you can connect an emulated video chip to a real video camera to see how it works. Finally, the emulator can be used and reused at different stages in the chip's development, or even for different chips. Emulator boxes are so expensive that they generally are treated as company resources.

Using Outside IP
One of the quickest ways to design a new chip is to not design it at all. Most big new chips include a fair amount of reused, borrowed, licensed, or recycled circuitry; not physically recycled silicon, of course, but recycled design ideas. Like a musician composing new variations on an old theme, it's often better to borrow and adapt than to create from scratch. In engineering circles this is called design reuse or intellectual property (IP) reuse. IP is a high-sounding name for intangible assets that can be sold, borrowed, or traded.

Using Outside IP
Lawyers use IP to refer to trademarks, logos, musical compositions, software, or nearly anything else that's valuable but insubstantial. Engineers use IP to refer to circuit designs. Since the mid-1990s there has been a growing market for third-party IP: circuit designs created by an independent engineering company solely for the purpose of selling them to other chip designers. As chip designs have grown ever more complex, the demand for this IP has grown and encouraged the supply. There are a few profitable "chip" companies that not only don't have their own factories, they don't even have their own chips. They license partial chip designs to others and collect a royalty.

Using Outside IP
To makers of large chips, it's an attractive alternative to buy parts of the design from outside IP suppliers. There are dozens of large IP vendors and hundreds of smaller ones. Some IP houses specialize in large (and valuable) types of circuit designs, such as entire 32-bit microprocessors, for which customers gladly pay more than $1 million in licensing fees, plus years of royalties down the road. There are also sources of free IP on the Internet, just as there are for free software. Although outside IP is a great boon to productive chip design, it isn't all smooth sailing.

Using Outside IP
First, customers often balk at the prices. Outside IP is designed to appeal to the largest possible audience, so it's naturally somewhat generic; potential customers might be looking for something more specific that outside IP doesn't offer. It's also possible that the IP won't be delivered in a form that the customer can use. If the bulk of the chip is being designed using Verilog, but the IP is delivered in VHDL, it's probably not usable. For IP firms that deliver their products as HDL, there's about a 50 percent chance of getting it wrong.

Using Outside IP
If the chip design fails its final verification, does the fault lie with the licensed IP or with the original work surrounding it? Neither side will be eager to admit liability. Usually these problems have their root in ambiguous or insufficient specifications or a misunderstanding between engineers on either side. Among IP vendors and customers, one of the first questions asked is, "Does this IP come in hard or soft form?" What the customer is asking is whether the circuit design will be delivered in a high-level HDL such as Verilog or VHDL, or as an already-synthesized "hard" design ready for manufacturing.

Using Outside IP
IP that's delivered in "soft" form (i.e., as an HDL description) is generally easier for the customer (the chip-design team) to work with, but it will have all the same drawbacks that all synthesized hardware suffers: larger size, slower speed, and higher cost when it's manufactured. On the plus side, the HDL is probably easier to integrate into the rest of the chip design, especially if the rest of the chip is also being designed with the same HDL. IP vendors sometimes don't like to provide their wares in HDL form, for two reasons. First, the usual drawbacks of synthesized hardware might make their product look bad.

Using Outside IP
The IP vendor can't guarantee, for example, that its circuit will run at 500 MHz, because it can't control the synthesis process. Consequently, the vendor winds up being very conservative and cagey about promising hard performance numbers. Unlike chip companies, IP companies rarely advertise MHz numbers (or power usage) in their literature. Another aspect of soft IP that gives the vendors headaches is the possibility of piracy and IP theft. Many IP vendors simply do not offer synthesizable IP cores for exactly that reason.

Using Outside IP
The alternative to soft IP is, of course, hard IP. A "hard" IP product (often called a core) isn't really hard; it's just the same as a soft core after it's been synthesized, placed, and routed. It's film, or the electronic equivalent of film. A hard core is nearly ready for manufacturing; all it needs is to be dropped into the rest of the chip design at the last minute. There are good and bad aspects to using hard IP. Hard cores are generally faster, smaller, and use less power than synthesized soft cores because they've been extensively hand-tuned.

Using Outside IP
On the other hand, hard IP cores are much more difficult to incorporate into the rest of the chip, especially if that chip is being synthesized. Essentially, the chip's designers have to leave a rectangular hole in their design that exactly fits the hard IP core. Then, during the late stages of preparation before manufacturing, the hard IP core is inserted. Hard IP cores prevent the customer's engineers from altering or modifying the core in any way, which is partially the point. IP vendors like delivering hard IP cores because they know their customers won't have access to the "recipe" for the IP.

Using Outside IP
Early IP vendors in the mid-1990s almost always delivered hard IP cores. The trend has moved more toward soft IP in recent years, however. As more and more engineering teams use hardware synthesis and HDLs for their own chips, they demand the same from their outside IP vendors. IP vendors might cringe because of the performance and security issues that soft IP raises, but the alternative is to lose business. Various industry groups have looked at ways to protect soft IP by "watermarking" the circuit design or through public-key encryption of the data files, but so far these efforts have produced little.

Using Outside IP
Another form of IP, although a much less glamorous one (if that's possible), is the physical library. These libraries fit in toward the end of a chip's design, as it's being synthesized, placed, and routed. As such, they change from one semiconductor manufacturer to another, and often from one building or site to another within the same company. In the later stages of a chip's design, the EDA tools need to know exactly how thick, how wide, and how high each silicon transistor will be, and these characteristics vary slightly from maker to maker.

Getting to Tape Out and Film


After the design, synthesis, testing, simulation, and verification are all done, the final steps are almost anticlimactic. Once the engineering team is satisfied that the chip design will work, it's simply a matter of pressing a button to tape out the new chip. Far from the old days when each layer of silicon and metal was literally taped out by hand, tape out now consists of a few moments for a computer to produce a file and store it on a CD-ROM. Often engineers will print out these files and hang up the colorful poster-sized prints of the chip's design.

Getting to Tape Out and Film


Film is not really film anymore. The file produced by the EDA software is called a GDS-II database, and it takes the place of actual film. Transporting a film box to the foundry is passé; uploading the GDS-II database is as simple as sending an e-mail.

Current Problems and Future Trends


Moore's Law provides us with a rich 58 percent compound annual interest rate on our transistor budgets; that's the same as doubling every 18 months. Given that it frequently takes more than 18 months to design a modern chip, chip designers are presented with a daunting challenge: take the biggest chip the world has ever seen and design something that's twice as big. Do it using today's tools, skills, and personnel, knowing that nobody has ever done it before. When you're finished, rest assured your boss will ask you to do it again.
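
As a quick check that those two figures agree: 1.58^1.5 = 1.58 × √1.58 ≈ 1.58 × 1.26 ≈ 2, so compounding at 58 percent per year for a year and a half (18 months) multiplies the transistor budget by roughly a factor of two.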

Current Problems and Future Trends


The modern EDA process is running out of steam. Humans are not becoming noticeably more productive, but the tasks they are given (at least, those in the semiconductor engineering professions) get more difficult at a geometric rate. Faster computers will help run the current EDA tools faster: faster synthesis, simulation, modeling, and verification will all speed up chip design by a little bit. However, even if computers were to get 58 percent faster every year (they don't), that would only keep pace with the increasing complexity of the chips they're asked to design. At best, we'd be marking time. In reality, chip designers are falling behind.

Current Problems and Future Trends


A change to testing and verification would provide a small boost. Testing now consumes more than half of some chips' project schedules. To cut this down to size, some companies have strict rules about how circuitry must be designed, in such a way that testing is simplified. Better still, these firms encourage their engineers to reuse as much as possible from previous designs, the assumption being that reused circuitry has already been validated. Unfortunately, reused circuitry has only been validated in a different chip; in new surroundings the same circuit might behave differently.

Current Problems and Future Trends


IP is another angle on the problem. Instead of helping designers design faster, it helps them design less. "Reuse, renew, and recycle" is the battle cry of many engineering managers. Engineers tend to design larger and more complex chips than they otherwise might have if they know they don't have to design every single piece themselves. Although that makes the resulting chips more powerful and feature-packed, it doesn't shorten the design cycle at all. With no clear winner and no clear direction to follow, the future of chip design will be determined largely by herd instinct: engineers will always choose what they feel most comfortable with.
