Advanced ASIC Chip Synthesis Using Synopsys (1999)
Contents

Foreword
Preface
Acknowledgements
About the Author

CHAPTER 1: ASIC DESIGN METHODOLOGY
    Typical Design Flow
        1. Specification and RTL Coding
        2. Dynamic Simulation
        3. Constraints, Synthesis and Scan Insertion
        4. Formal Verification
        5. Static Timing Analysis using PrimeTime
        6. Placement, Routing and Verification
        7. Engineering Change Order
    Chapter Summary

CHAPTER 2: TUTORIAL
    2.1 Example Design
    2.2 Initial Setup
    2.3 Pre-Layout Steps
        2.3.1 Synthesis
        2.3.2 Static Timing Analysis using PrimeTime
        2.3.3 SDF Generation
        2.3.4 Verification
    2.4 Floorplanning and Routing
    2.5 Post-Layout Steps
        2.5.1 Static Timing Analysis using PrimeTime
        2.5.2 Post-Layout Optimization
    2.6 Chapter Summary

CHAPTER 3: BASIC CONCEPTS
    3.1 Synopsys Products
    3.2 Synthesis Environment
        3.2.1 Startup Files
        3.2.2 System Library Variables
    3.3 Objects, Variables and Attributes
        3.3.1 Design Objects
        3.3.2 Variables
        3.3.3 Attributes
    3.4 Finding Design Objects
    3.5 Synopsys Formats
    3.6 Data Organization
    3.7 Design Entry
    3.8 Compiler Directives
        3.8.1 HDL Compiler Directives
        3.8.2 VHDL Compiler Directives
    3.9 Chapter Summary

CHAPTER 4: SYNOPSYS TECHNOLOGY LIBRARY
    4.1 Library Basics
        4.1.1 Library Group
        4.1.2 Library Level Attributes
        4.1.3 Environment Description
        4.1.4 Cell Description
    4.2 Delay Calculation
        4.2.1 Delay Model
        4.2.2 Delay Calculation Problems
    4.3 What is a Good Library?
    4.4 Chapter Summary

CHAPTER 5: PARTITIONING AND CODING STYLES
    5.1 Partitioning for Synthesis
    5.2 What is RTL?
        5.2.1 Software versus Hardware
    5.3 General Guidelines
        5.3.1 Technology Independence
        5.3.2 Clock Logic
        5.3.3 No Glue Logic at the Top
        5.3.4 Module Name Same as File Name
        5.3.5 Pads Separate from Core Logic
        5.3.6 Minimize Unnecessary Hierarchy
        5.3.7 Register All Outputs
        5.3.8 Guidelines for FSM Synthesis
    5.4 Logic Inference
        5.4.1 Incomplete Sensitivity Lists
        5.4.2 Memory Element Inference
        5.4.3 Multiplexer Inference
        5.4.4 Three-State Inference
    5.5 Order Dependency
        5.5.1 Blocking versus Non-Blocking Assignments in Verilog
        5.5.2 Signals versus Variables in VHDL
    5.6 Chapter Summary

CHAPTER 6: CONSTRAINING DESIGNS
    6.1 Environment and Constraints
        6.1.1 Design Environment
        6.1.2 Design Constraints
    6.2 Advanced Constraints
    6.3 Clocking Issues
        6.3.1 Pre-Layout
        6.3.2 Post-Layout
        6.3.3 Generated Clocks
    6.4 Putting it Together
    6.5 Chapter Summary

CHAPTER 7: OPTIMIZING DESIGNS
    7.1 Design Space Exploration
    7.2 Total Negative Slack
    7.3 Compilation Strategies
        7.3.1 Top-Down Hierarchical Compile
        7.3.2 Time-Budgeting Compile
        7.3.3 Compile-Characterize-Write-Script-Recompile
    7.4 Resolving Multiple Instances
    7.5 Optimization Techniques
        7.5.1 Compiling the Design
        7.5.2 Flattening and Structuring
        7.5.3 Removing Hierarchy
        7.5.4 Optimizing Clock Networks
        7.5.5 Optimizing for Area
    7.6 Chapter Summary

CHAPTER 8: DESIGN FOR TEST
    8.1 Types of DFT
        8.1.1 Memory BIST
        8.1.2 Boundary Scan DFT
    8.2 Scan Insertion
        8.2.1 Making Design Scannable
        8.2.2 Test Pattern Generation
    8.3 DFT Guidelines
        8.3.1 Tri-State Bus Contention
        8.3.2 Latches
        8.3.3 Gated Reset or Preset
        8.3.4 Gated or Generated Clocks
        8.3.5 Use Single Edge of the Clock
        8.3.6 Multiple Clock Domains
        8.3.7 Order Scan-Chains to Minimize Clock Skew
        8.3.8 Logic Un-Scannable due to Memory Element
    8.4 Chapter Summary

CHAPTER 9: LINKS TO LAYOUT & POST-LAYOUT OPT.
    9.1 Generating Netlist for Layout
        9.1.1 Uniquify
        9.1.2 Tailoring the Netlist for Layout
        9.1.3 Remove Unconnected Ports
        9.1.4 Visible Port Names
        9.1.5 Verilog Specific Statements
        9.1.6 Unintentional Clock or Reset Gating
        9.1.7 Unresolved References
    9.2 Layout
        9.2.1 Floorplanning
        9.2.2 Clock Tree Insertion
        9.2.3 Transfer of Clock Tree to Design Compiler
        9.2.4 Routing
        9.2.5 Extraction
    9.3 Post-Layout Optimization
        9.3.1 Back Annotation and Custom Wire Loads
        9.3.2 In-Place Optimization
        9.3.3 Location Based Optimization
        9.3.4 Fixing Hold-Time Violations
    9.4 Future Directions
    9.5 Chapter Summary

CHAPTER 10: SDF GENERATION
    10.1 SDF File
    10.2 SDF File Generation
        10.2.1 Generating Pre-Layout SDF File
        10.2.2 Generating Post-Layout SDF File
        10.2.3 Issues Related to Timing Checks
        10.2.4 False Delay Calculation Problem
        10.2.5 Putting it Together
    10.3 Chapter Summary

CHAPTER 11: PRIMETIME BASICS
    11.1 Introduction
        11.1.1 Invoking PT
        11.1.2 PrimeTime Environment
        11.1.3 Automatic Command Conversion
    11.2 Tcl Basics
        11.2.1 Command Substitution
        11.2.2 Lists
        11.2.3 Flow Control and Loops
    11.3 PrimeTime Commands
        11.3.1 Design Entry
        11.3.2 Clock Specification
        11.3.3 Timing Analysis Commands
        11.3.4 Other Miscellaneous Commands
    11.4 Chapter Summary

CHAPTER 12: STATIC TIMING ANALYSIS
    12.1 Why Static Timing Analysis?
        12.1.1 What to Analyze?
    12.2 Timing Exceptions
        12.2.1 Multicycle Paths
        12.2.2 False Paths
    12.3 Disabling Timing Arcs
        12.3.1 Disabling Timing Arcs Individually
        12.3.2 Case Analysis
    12.4 Environment and Constraints
        12.4.1 Operating Conditions - A Dilemma
    12.5 Pre-Layout
        12.5.1 Pre-Layout Clock Specification
        12.5.2 Timing Analysis
    12.6 Post-Layout
        12.6.1 What to Back Annotate?
        12.6.2 Post-Layout Clock Specification
        12.6.3 Timing Analysis
    12.7 Analyzing Reports
        12.7.1 Pre-Layout Setup-Time Analysis Report
        12.7.2 Pre-Layout Hold-Time Analysis Report
        12.7.3 Post-Layout Setup-Time Analysis Report
        12.7.4 Post-Layout Hold-Time Analysis Report
    12.8 Advanced Analysis
        12.8.1 Detailed Timing Report
        12.8.2 Cell Swapping
        12.8.3 Bottleneck Analysis
        12.8.4 Clock Gating Checks
    12.9 Chapter Summary

APPENDIX
INDEX

Foreword

Our semiconductor industry is increasingly characterized by accelerated product obsolescence. As a result, business success is increasingly dependent upon the ability of development teams to deliver, in the shortest "time-to-market", a product that meets customer requirements. Early product introduction means higher profit margins, lasting only until slower-to-market competitors enter and erode prices. This intense cycle of market price erosion has been particularly evident in the personal computer industry over the last few years.
Consumers are continually demanding quality products at lower cost but with increasing features. Semiconductor suppliers are, in turn, driven to develop system-on-a-chip (SoC) products utilizing VDSM (Very-Deep-Sub-Micron) technologies, just to remain competitive. Several high performance tools and techniques have been developed over the past few years to mitigate somewhat this "time-to-market" pressure and to enable rapid design updates to meet evolving customer specifications. These changes have resulted in a redefinition of standard ASIC design flow methodologies. High level design languages, like VHDL and Verilog, have displaced schematic capture, thus promoting design reuse. Dynamic simulation has given way to formal verification and static timing analysis. In addition, synthesis engines have become more sophisticated, targeting complex designs containing millions of gates and large IP cores. It is now estimated that the number of gates in a complex ASIC will approach 10 million early in the next decade. Successfully achieving these levels of integration in a time-to-market focused development environment will require an intimate knowledge of the ASIC design flow in the VDSM realm and a complex integration of products offered by multiple EDA tool vendors.

This book, written by Himanshu Bhatnagar, provides a comprehensive overview of the ASIC design flow targeted for VDSM technologies using the Synopsys suite of tools. It emphasizes the practical issues faced by the semiconductor design engineer in terms of synthesis and the integration of front-end and back-end tools. Traditional design methodologies are challenged and unique solutions are offered to help define the next generation of ASIC design flows. The author provides numerous practical examples derived from real-world situations that will prove valuable to practicing ASIC design engineers as well as to students of advanced VLSI courses in ASIC design.

Dr. Dwight W.
Decker
Chairman and CEO, Conexant Systems, Inc.
(Formerly, Rockwell Semiconductor Systems)
Newport Beach, California, USA

Preface

This book describes the advanced concepts and techniques used toward ASIC chip synthesis, formal verification and static timing analysis, using the Synopsys suite of tools. In addition, the entire ASIC design flow methodology targeted for VDSM (Very-Deep-Sub-Micron) technologies is covered in detail. The emphasis of this book is on real-time application of Synopsys tools, used to combat various problems seen at VDSM geometries. Readers will be exposed to an effective design methodology for handling complex, sub-micron ASIC designs. Significance is placed on HDL coding style, synthesis and optimization, dynamic simulation, formal verification, scan insertion, links to layout, and static timing analysis. At each step, problems related to each phase of the design flow are identified, with solutions and workarounds described in detail. In addition, issues related to layout, which include clock tree synthesis and back-end integration (links to layout), are also discussed at length. Furthermore, the book contains in-depth discussions on the basics of Synopsys technology libraries and HDL coding styles, targeted towards optimal synthesis solutions.

Target audiences for this book are practicing ASIC design engineers and graduate level students undertaking advanced VLSI courses on ASIC chip design and DFT techniques. This book is not intended as a substitute or a replacement for the Synopsys reference manual, but is meant for anyone who is involved in the ASIC design flow. Also, it is useful for those designers (and companies) who do not have layout capability, or their own technology libraries, but rely on outside vendors for back-end integration and final fabrication of the device. The book provides alternatives to the traditional method of netlist hand-off to outside vendors because of various issues related to VDSM technologies.
It also addresses solutions to common problems faced by designers when interfacing various tools from different EDA tool vendors.

Overview of the Chapters

Chapter 1 presents an overview of the various stages involved in the ASIC design flow using Synopsys tools. The entire design flow is briefly described, starting from concept to chip tape-out. This chapter is useful for designers who have not delved into the full process of chip design and integration, but would like to learn the full process of the ASIC design flow.

Chapter 2 outlines the practical aspects of the ASIC design flow as described in Chapter 1. Beginners may use this chapter as a tutorial. Advanced users of Synopsys tools may benefit by using this chapter as a reference. Users with no prior experience in synthesis using Synopsys tools should skip this chapter and return to it later after reading the rest of the book.

The basic concepts related to synthesis are described in detail in Chapter 3. These concepts introduce the reader to synthesis terminology used throughout the later chapters. Readers will find the information provided here useful by gaining a basic understanding of these tools and their environment. In addition to describing the purpose of each tool and its setup, this chapter also focuses on defining objects, variables, attributes and compiler directives used by the Design Compiler.

Chapter 4 describes the basics of the Synopsys technology library. Designers usually do not concern themselves with the full details of the technology library, as long as the library contains a variety of cells with different drive strengths. However, a rich library usually determines the quality of synthesis. Therefore, the intent of this chapter is to describe the Synopsys technology library from the designer's perspective. Focus is provided on the delay calculation method and other techniques that designers may use in order to alter the behavior of the technology library, and hence the quality of the synthesized design.
Proper partitioning and good coding style are essential in obtaining quality results. Chapter 5 provides guidelines on various techniques that may be used to correctly partition the design in order to achieve the optimal solution. In addition, HDL coding styles are covered in this chapter, which illustrates numerous examples and provides recommendations to designers on how to code the design in order to produce faster logic and minimum area.

The Design Compiler commands used for synthesis and optimization are described in Chapter 6. This chapter contains information that is useful for both the novice and the advanced users of Synopsys tools. The chapter focuses on real-world applications by taking into account deviations from the ideal situation, i.e., "Not all designs, or designers, follow Synopsys recommendations". The chapter illustrates numerous examples that help guide the user in real-time application of the commands.

Chapter 7 discusses optimization techniques used in order to meet timing and area requirements. A comparison between the older version of Design Compiler and the new version is highlighted. Emphasis is placed on the new optimization technique employed by Design Compiler called "TNS". Also, a detailed analysis of various methods used for optimizing logic is presented. In addition, different compilation strategies, each with advantages and disadvantages, are discussed in detail.

DFT techniques are increasingly gaining momentum among ASIC design engineers. Chapter 8 provides a brief overview of the different types of DFT techniques that are in use today, followed by a detailed description of how devices can be made scannable using Synopsys's Test Compiler. It describes commands used for inserting scan through Design Compiler. A multitude of guidelines is presented in order to alleviate the problems related to DFT scan insertion on a design.

Chapter 9 discusses the links to layout feature of Design Compiler.
It describes the interface between the front-end and back-end tools. Also, this chapter provides different strategies used for post-layout optimization of designs. This includes in-place and location based optimization techniques. Furthermore, a section is devoted to clock tree insertion and issues related to clock tree transfer to Design Compiler. Various solutions to this common problem are described. This chapter is extremely valuable for designers (and companies) who do not possess their own layout tool, but would like to learn the place and route process along with full chip integration techniques.

Chapter 10, titled "SDF Generation for Dynamic Timing Simulation", describes the process of generating the SDF file from Design Compiler or PrimeTime. A section is devoted to the syntax of the SDF format, followed by a detailed discussion of the process of SDF generation, both for the pre and post layout phases of the design. In addition, a few innovative ideas and suggestions are provided to facilitate designers in performing successful simulation. This chapter is useful for those designers who prefer the dynamic simulation method to formal verification techniques, in order to verify the functionality of the design.

Chapter 11 introduces to the reader the basics of static timing analysis, using PrimeTime. This includes a brief section devoted to the Tcl language that is utilized by PrimeTime. Also described in this chapter are selected PrimeTime commands that are used to perform static timing analysis, and also facilitate the designer in debugging the design for possible timing violations. The key to working silicon usually lies in successful completion of static timing analysis performed on a particular design. This capability makes static timing analysis one of the most important steps in the entire design flow, and it is used by many designers as a sign-off criterion to the ASIC vendor.
Chapter 12 is devoted to several basic and advanced topics on static timing analysis, using PrimeTime. It effectively illustrates the usage of PrimeTime, both for the pre and the post-layout phases of the ASIC design flow process. In addition, numerous examples on analyzing reports and suggestions on various scenarios are provided. This chapter is useful to those who would like to migrate from traditional methods of dynamic simulation to the method of analyzing designs statically. It is also helpful for those readers who would like to perform in-depth analysis of the design through PrimeTime.

Conventions Used in the Book

All Synopsys commands are typed in "Arial" font. This includes all examples that contain synthesis and timing analysis scripts. The command line prompt is typed in "Courier New" font. For example: dc_shell> and pt_shell>

Option values for some of the commands are enclosed in < and >. In general, these values need to be replaced before the command can be used. For example:

    set_false_path -from <start point> -to <end point>
    dc_shell> create_clock -period 33 -waveform {0 16.5} tck
    dc_shell> set_clock_skew -delay 2.0 tck
    dc_shell> set_clock_transition 0.2 tck

The following script uses the above commands and may be used to generate the pre-layout SDF for the tap_controller design. This SDF file is targeted for simulating the design dynamically with timing. In addition, the script also generates the timing constraints file. Though this file is also in SDF format, it is solely used for forward annotating the timing information to the layout tool in order to perform timing driven layout.

DC script for pre-layout SDF generation

    active_design = tap_controller
    read active_design + ".db"
    current_design active_design
    link
    set_wire_load LARGE -mode top
    set_operating_conditions WORST
    create_clock -period 33 -waveform {0 16.5} tck
    set_clock_skew -delay 2.0 -minus_uncertainty 3.0 tck
    set_clock_transition 0.2 tck
    set_driving_cell -cell BUFF1X -pin Z all_inputs()
    set_drive 0 {tck trst}
    set_load 50 all_outputs()
    set_input_delay 20.0 -clock tck -max all_inputs()
    set_output_delay 10.0 -clock tck -max all_outputs()
    write_timing -format sdf-v2.1 \
        -output active_design + ".sdf"
    write_constraints -format sdf -cover_design \
        -output constraints.sdf

2.3.4 Verification

The final step before layout is to verify the structural netlist for functionality. There are two methods that perform this step: a) dynamic timing simulation, using the SDF file; b) formal verification.

2.3.4.1 Gate-Level Simulation with Pre-Layout SDF

This topic needs no clarification, as everyone in the ASIC world is well versed in the process of dynamic simulation. Simulating the structural netlist involves exercising combinations of input signals, while observing the outputs. This is generally performed by means of a test-bench coded in Verilog or VHDL, which emulates the whole system. In other words, it incorporates/envelops the tap_controller design.
Dynamic simulation of the gate-level netlist is the traditional approach and is still regarded by many as the best way to debug the design, and to check for functionality at the same time. However, this approach has some shortfalls. Primarily, certain paths of the logic in the design may never be activated (get sensitized) for the specified combination of input signals. This means that these paths will not be checked for correct functionality and timing.

Another problem with the dynamic simulation method is the run-time for large designs containing millions of gates. The run-time for these designs can be enormous. It may even become impractical to run gate-level simulations on such designs. The concept of formal verification was introduced to overcome these issues, which are discussed in the next section.

Let us assume that the tap_controller design is passing all the functional vectors with no setup or hold-time violations, using the pre-layout SDF generated previously.

2.3.4.2 Formal Verification

Synopsys recently introduced a tool that is capable of performing formal verification, called Formality. Formality uses mathematical algorithms in order to check the logic equivalency of a design. Since this technique is static (i.e., it does not use input test vectors), the verification run-times are reduced dramatically as compared to the dynamic simulation method. Furthermore, it also provides a powerful means to verify large designs in a comparatively short duration. However, the most significant advantage of using Formality is that complete verification of the design is achieved without omitting any logic paths (as is possible with dynamic simulation techniques).

This section (and the book) briefly outlines the purpose and importance of using formal verification techniques and where they can be applied. However, the syntax and usage of the commands used by Formality will not be described.
Formality may be used to verify RTL against RTL, RTL against synthesized gate-level netlist, or gate-level against gate-level netlist. At this point, Formality should be used to verify the RTL against the synthesized netlist to check for the functional validity of the gate-level netlist. Compared to gate-level simulation using the pre-layout SDF, Formality takes a fraction of the time to completely verify the design.

2.4 Floorplanning and Routing

The floorplanning step involves physical placement of cells and clock tree synthesis. Both these steps are performed within the layout tool. The placement step may include timing driven placement of the cells, which is performed by annotating the constraints.sdf file (generated by DC) to the layout tool. This file consists of path delays that include the cell-to-cell timing information. This information is used by the layout tool to place cells with timing as the main criterion, i.e., the layout tool will place timing critical cells closer to each other in order to minimize the path delay.

Let's assume that the design has been floorplanned. Also, the clock tree has been inserted in the design by the layout tool. The clock tree insertion modifies the existing structure of the design. In other words, the netlist in the layout tool is different from the original netlist present in DC. This is because the design present in the layout tool contains the clock tree, whereas the original design in DC does not contain this information. Therefore, the clock tree information should somehow be transferred to the design residing in DC or PT. The new netlist (containing the clock tree information) should be formally verified against the original netlist to ensure that the transfer of the clock tree did not break the functionality of the original logic. Various methods of transferring the clock tree information to the design are explored in detail in Chapter 9.
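One straightforward way to bring the clock tree into DC, sketched below as a hedged illustration (the netlist file name tap_controller_cts.v is hypothetical, and the alternatives are weighed in Chapter 9), is to read the post-clock-tree netlist written out by the layout tool back into DC and re-link it:

```
/* hedged sketch: replace the DC netlist with the post-CTS netlist */
/* tap_controller_cts.v is a hypothetical file name */
read -format verilog tap_controller_cts.v
current_design tap_controller
link
```

The resulting netlist should then be formally verified against the original, as described above.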
For the sake of simplicity, let us assume that the clock tree information is present in the tap_controller design.

The design is now ready for routing. In a broad sense, routing is performed in two phases: global route and detailed route. During global route, the router divides the layout surface into separate regions and performs a point-to-point "loose" routing without actually placing the geometric wires. The final routing is performed by the detailed router, which physically places the geometric wires and routes within the regions. Full explanations of these types of routing are given in Chapter 9.

Let's assume that the design has been global routed. The next step involves extracting the estimated parasitic capacitances and RC delays from the global routed design. This step reduces the synthesis-layout iteration time, especially since cell placement and global routing may take much less time than detailed routing the entire chip. However, if the cells are placed optimally with minimal congestion, detailed routing is also very fast. In any case, extraction of delays after the global route phase (albeit estimates) provides a faster method of getting closer to the real delay values that are extracted from the layout database after the detailed routing phase. Back annotate the estimates to the design in PT for setup and hold-time static timing analysis, using the following scripts.
PT script for setup-time analysis, using estimated delays

    # Define top-level design name
    set active_design tap_controller

    # Design entry in db format, netlist only - no constraints
    read_db -netlist_only $active_design.db
    current_design $active_design

    # Design environment
    set_operating_conditions WORST
    set_load 50.0 [all_outputs]
    set_driving_cell -cell BUFF1X -pin Z [all_inputs]

    # Back annotation of estimated data from floorplan
    source capacitance.pt   ;# estimated parasitic capacitances
    read_sdf rc_delays.sdf  ;# estimated RC delays

    # Clock specification and design constraints
    create_clock -period 33 -waveform {0 16.5} tck
    set_propagated_clock [get_clocks tck]
    set_clock_uncertainty 0.5 -setup [get_clocks tck]
    set_input_delay 20.0 -clock tck [all_inputs]
    set_output_delay 10.0 -clock tck [all_outputs]

    # Timing analysis commands
    report_constraint -all_violators
    report_timing -to [all_registers -data_pins]
    report_timing -to [all_outputs]

PT script for hold-time analysis, using estimated delays

    # Define design name
    set active_design tap_controller

    # Design entry in db format, netlist only - no constraints
    read_db -netlist_only $active_design.db
    current_design $active_design

    # Design environment
    set_operating_conditions BEST
    set_load 20.0 [all_outputs]
    set_driving_cell -cell BUFF1X -pin Z [all_inputs]

    # Back annotation of estimated data from floorplan
    source capacitance.pt   ;# estimated parasitic capacitances
    read_sdf rc_delays.sdf  ;# estimated RC delays

    # Clock specification and design constraints
    create_clock -period 33 -waveform {0 16.5} tck
    set_propagated_clock [get_clocks tck]
    set_clock_uncertainty 0.05 -hold [get_clocks tck]
    set_input_delay 0.0 -clock tck [all_inputs]
    set_output_delay 0.0 -clock tck [all_outputs]

    # Timing analysis commands
    report_constraint -all_violators
    report_timing -to [all_registers -data_pins] \
        -delay_type min
    report_timing -to [all_outputs] -delay_type min

The above scripts back annotate the capacitance.pt and rc_delays.sdf files. The capacitance.pt file contains the capacitive loading per net of the design in set_load format, while the rc_delays.sdf file contains point-to-point interconnect RC delays of individual nets. DC (and PT) performs the calculation of cell delay based upon the output net loading and input slope of each cell in the design. The reason for using this approach is explained in detail in Chapter 9.

If the design fails setup-time requirements, you may re-synthesize the design with adjusted constraints or re-floorplan the design. If the design is failing hold-time requirements, then depending on the degree of violation you may decide to proceed to the final step of detailed routing the design, or re-optimize the design with adjusted constraints.

If re-synthesis is desired then the floorplan (placement) information, consisting of the physical clusters and cell locations, should be back annotated to DC. This step is desired because up till now, DC did not know the physical placement information of cells. By annotating the placement information to DC, the post-layout optimization of the design within DC is vastly improved. The layout tool generates the physical information in PDEF format that can be read by DC, using the following command:

    read_clusters <cluster file>
The script to perform this is similar to the initial synthesis script, with the exception of the annotated data and incremental optimization of the design, as illustrated below:

Script for incremental synthesis of the design

    active_design = tap_controller
    read active_design + ".db"
    current_design active_design
    link
    include capacitance.dc   /* estimated parasitic capacitances */
    read_timing -f sdf rc_delays.sdf   /* estimated RC delays */
    read_clusters clusters.pdef   /* physical information */

    /* create custom wire-load models for next iteration of synthesis */
    create_wire_load -hierarchy -percentile 80 -output cwlm.txt

    create_clock -period 33 -waveform {0 16.5} tck
    set_clock_skew -propagated -minus_uncertainty 3.0 tck
    set_dont_touch_network {tck trst}
    set_driving_cell -cell BUFF1X -pin Z all_inputs()
    set_drive 0 {tck trst}
    set_input_delay 20.0 -clock tck -max all_inputs()
    set_output_delay 10.0 -clock tck -max all_outputs()
    set_max_area 0
    set_fix_multiple_port_nets -all -buffer_constants

    /* Re-optimize the design to control the amount of modifications */
    reoptimize_design -in_place

    write -hierarchy -output active_design + ".db"
    write -format verilog -hierarchy \
        -output active_design + ".v"

The create_wire_load command used in the above script creates a custom wire-load model for the tap_controller design. The initial synthesis run used the wire-load models present in the technology library, which are not design specific. Therefore, in order to achieve better accuracy for the next synthesis iteration, the custom wire-load models specific to the design should be used. The following command may be used to update the technology library present in DC's memory to reflect the new custom wire-load models. For example:

    dc_shell> update_lib ex25_worst.db cwlm.txt

Let's assume that the design has been re-analyzed and is now passing both setup and hold-time requirements.
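Once update_lib has loaded the custom models, they can be selected like any stock wire-load model; a minimal sketch, where the model name tap_controller_cwlm is hypothetical (the name that create_wire_load actually assigns depends on the design and the options used):

```
/* hedged sketch: select the custom wire-load model by name */
set_wire_load tap_controller_cwlm -mode top
```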
The next step is to detail route the design. This is a layout dependent feature, and therefore will not be discussed here.

2.5 Post-Layout Steps

The post-layout steps involve verifying the design for timing with actual delays back annotated; functional simulation of the design; and lastly, performing LVS and DRC.

Let us presume that the design has been fully routed with minimal congestion and area. The finished layout surface must then be extracted to get the actual parasitic capacitances and interconnect RC delays. Depending upon the layout tool and the type of extraction, the extracted values are generally written out in the SDF format for the interconnect RC delays, while the parasitic information is generated as a string of set_load commands for each net in the design. In addition, if a hierarchical place and route has been performed, the physical placement location of cells in the PDEF format should also be generated.

2.5.1 Static Timing Analysis using PrimeTime

The first step after layout is to perform static timing analysis on the design, using the actual delays. Similar to post-placement, the post-route timing analysis uses the same commands, except that this time the actual delays are back annotated to the design. Predominantly, the timing of the design is dependent upon clock latency and skew. It is therefore prudent to perform the clock skew analysis before attempting to analyze the whole design. A useful Tcl script is provided by Synopsys through their on-line support on the web, called SolvNet. You may download this script and run the analysis before proceeding. Let us assume that the clock latency and skew are within limits.
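For reference, the extracted parasitic file described above is typically nothing more than a list of set_load commands, one per net; a hypothetical excerpt in PT syntax (the net names and values are invented for illustration):

```
# hypothetical excerpt of an extracted capacitance file (capacitance.pt)
set_load 0.046 [get_nets n1201]
set_load 0.112 [get_nets u_tap/state_reg_n2]
```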
The next step is to perform the static timing analysis on the design, to check for setup and hold-time violations (if any), using the following scripts:

PT script for setup-time analysis, using actual delays

# Define top-level design name
set active_design tap_controller

# Design entry in db format, netlist only - no constraints
read_db -netlist_only $active_design.db
current_design $active_design

# Design environment
set_operating_conditions WORST
set_load 50.0 [all_outputs]
set_driving_cell -cell BUFF1X -pin Z [all_inputs]

# Back annotation of real data from the routed design
source capacitance.pt                 ;# actual parasitic capacitances
read_sdf rc_delays.sdf                ;# actual RC delays
read_parasitics clock_info_wrst.spf   ;# for clocks etc.

# Clock specification and design constraints
create_clock -period 33 -waveform {0 16.5} tck
set_propagated_clock [get_clocks tck]
set_clock_uncertainty 0.5 -setup [get_clocks tck]
set_input_delay 20.0 -clock tck [all_inputs]
set_output_delay 10.0 -clock tck [all_outputs]

# Timing analysis commands
report_constraint -all_violators
report_timing -to [all_registers -data_pins]
report_timing -to [all_outputs]

PT script for hold-time analysis, using actual delays

# Define top-level design name
set active_design tap_controller

# Design entry in db format, netlist only - no constraints
read_db -netlist_only $active_design.db
current_design $active_design

# Design environment
set_operating_conditions BEST

# Back annotation of real data from the routed design
source capacitance.pt                 ;# actual parasitic capacitances
read_sdf rc_delays.sdf                ;# actual RC delays
read_parasitics clock_info_best.spf   ;# for clocks etc.
set_load 50.0 [all_outputs]
set_driving_cell -cell BUFF1X -pin Z [all_inputs]

# Clock specification and design constraints
create_clock -period 33 -waveform {0 16.5} tck
set_propagated_clock [get_clocks tck]
set_clock_uncertainty 0.05 -hold [get_clocks tck]
set_input_delay 0.0 -clock tck [all_inputs]
set_output_delay 0.0 -clock tck [all_outputs]

# Timing analysis commands
report_constraint -all_violators
report_timing -to [all_registers -data_pins] \
              -delay_type min
report_timing -to [all_outputs] -delay_type min

2.5.2 Post-Layout Optimization

Post-layout optimization (PLO) may be performed on the design to improve or fix the timing. DC provides several methods of fixing timing violations through the in-place optimization (IPO) feature. As before, DC also makes use of the physical placement information to perform location based optimization (LBO). In this example, we will use the cell resizing and buffer insertion features of IPO to fix the hold-time violations.

2.5.2.1 Fixing Hold-Time Violations

The design was synthesized for maximum setup-time requirements. Timing was verified at each step (after synthesis and then after the global route phase); therefore, in all probability the routed design will pass the setup-time requirements. However, some parts of the design may fail hold-time requirements at various endpoints. If the design fails the hold-time requirements, you should fix the violations by adding buffers to delay the arrival time of the failing signals with respect to the clock.

Let's assume that the design is failing hold-time requirements at multiple endpoints. There are various approaches to fixing hold-time violations. Such methods are discussed in detail in Chapter 9.
In this example, we will utilize the dc_shell commands to fix the hold-time violations, as illustrated below:

DC script to fix the hold-time violations

active_design = tap_controller
read active_design + ".db"
current_design active_design
link

include capacitance.dc             /* actual parasitic capacitances */
read_timing -f sdf rc_delays.sdf   /* actual RC delays */
read_clusters clusters.pdef        /* physical hierarchy info */

create_clock -period 33 -waveform {0 16.5} tck
set_clock_skew -propagated -plus_uncertainty 0.05 tck
set_dont_touch_network {tck trst}
set_driving_cell -cell BUFF1X -pin Z all_inputs()
set_drive 0 {tck trst}
set_input_delay -min 0.0 -clock tck all_inputs()
set_output_delay -min 0.0 -clock tck all_outputs()

set_fix_hold tck   /* fix hold-time violations w.r.t. tck */

reoptimize_design -in_place

write -hierarchy -output active_design + ".db"
write -format verilog -hierarchy \
      -output active_design + ".sv"

In the above script, the set_fix_hold command instructs DC to fix hold-time violations with respect to the clock tck. The -in_place argument of the reoptimize_design command invokes IPO, which is regulated by various variables that are described in Chapter 9. Making use of these variables, DC inserts or resizes gates to fix the hold-time violations. The LBO variables are helpful in inserting the buffers at the correct location, so as to minimize their impact on other logic paths leading off from the violating path.

After IPO, the design should again be analyzed through PT, using the post-layout PT script illustrated before, to ensure that the violations have been fixed.

Once the design passes all timing requirements, the post-layout SDF may be generated (from PT or DC) for simulation purposes, if needed. We will use DC to generate the worst-case post-layout SDF using the script provided below. A similar script may be used to generate the best-case SDF.
Obviously, you need to back annotate the best-case extracted numbers from the layout tool to generate the best-case SDF from DC. This solely depends on the layout tool and the methodology being used.

DC script for worst-case post-layout SDF generation

active_design = tap_controller
read active_design + ".db"
current_design active_design
link

set_operating_conditions WORST
include capacitance.dc       /* actual parasitic capacitances */
read_timing rc_delays.sdf    /* actual RC delays */

create_clock -period 33 -waveform {0 16.5} tck
set_clock_skew -propagated -minus_uncertainty 0.5 tck
set_driving_cell -cell BUFF1X -pin Z all_inputs()
set_drive 0 {tck trst}
set_load 50 all_outputs()
set_input_delay 20.0 -clock tck -max all_inputs()
set_output_delay 10.0 -clock tck -max all_outputs()

write_timing -format sdf-v2.1 \
             -output active_design + ".sdf"

It is recommended that formal verification be performed again between the source RTL and the final netlist, to check for any errors that may have been unintentionally introduced during the whole process. This is the final step; the design is now ready for LVS and DRC checks, before tape-out.

2.6 Chapter Summary

This chapter highlighted the practical side of the ASIC design methodology in the form of a tutorial. An example design was used to guide the reader from start to finish. At each stage, brief explanations and relevant scripts were provided.

The chapter started with the basics of setting up the Synopsys environment and the technical specification of the example design. Further sections were divided into pre-layout, floorplanning and routing, and finally the post-layout steps. The pre-layout steps included initial synthesis and scan insertion of the design, along with static timing analysis and SDF generation for dynamic simulation. In order to
minimize the synthesis-layout iterations, the floorplanning and routing section stressed the placement of cells, with emphasis on back annotating to DC the estimated delays extracted after globally routing the design. The final section used post-layout optimization techniques to fix the hold-time violations, and to generate the final SDF for simulation. Finally, the application of formal verification using Synopsys Formality was also included. This section did not contain any scripts, but the reader was made aware of the usefulness of formal techniques and where they are applied.

BASIC CONCEPTS

This chapter covers the basic concepts related to synthesis, using the Synopsys suite of tools. These concepts introduce the reader to the synthesis terminology used throughout the later chapters. These terms provide the necessary framework for Synopsys synthesis and static timing analysis. Although this chapter is a good reference, advanced users of Synopsys tools, already familiar with Synopsys terminology, may skip this chapter.

3.1 Synopsys Products

This section briefly describes all relevant Synopsys products related to this book.

a) Library Compiler
b) Design Compiler and Design Analyzer
c) PrimeTime
d) Test Compiler
e) Formality

Library Compiler

The core of any ASIC design is the technology library containing a set of logic cells. The library may contain the functional description, timing, area and other pertinent information for each cell. Library Compiler (LC) parses this textual information for completeness and correctness, before converting it to a format used globally by all Synopsys applications. Library Compiler is invoked by typing lc_shell in a UNIX shell.
All the capabilities of LC can also be utilized from within dc_shell.

Design Compiler and Design Analyzer

The Synopsys Design Compiler (DC) and Design Analyzer (DA) comprise a powerful suite of logic synthesis products, designed to provide an optimal gate-level synthesized netlist based on the design specifications and timing constraints. In addition to high-level synthesis capabilities, the suite also incorporates a static timing analysis engine, along with solutions for FPGA synthesis and links-to-layout (LTL). Design Compiler is the command-line interface of the Synopsys synthesis tool and is invoked by typing dc_shell in a UNIX shell. Design Analyzer is the graphical front-end version of DC and is launched by typing design_analyzer. Design Analyzer also supports schematic generation, with critical path analysis through point-to-point highlighting. Although beginners may initially prefer using DA, they quickly migrate to using DC as they become more familiar with Synopsys commands.

PrimeTime

PrimeTime (PT) is the Synopsys sign-off quality, full-chip, gate-level static timing analysis tool. In addition, it allows for the comprehensive modeling capabilities often required by large designs. PT is faster than DC's internal static timing analysis engine. It also provides enhanced analysis capabilities, both textually and graphically. In contrast to the rest of the Synopsys tools, PT is Tcl language based, thereby providing the powerful features of that language to promote the analysis and debugging of the design. PT is a stand-alone tool and can be invoked as a command-line interface or graphically. To use the command-line interface, type pt_shell in a UNIX window, or type primetime for the graphical version.

Test Compiler

Test Compiler (TC) is the Synopsys test insertion tool that is incorporated within the DC suite of tools. TC is used to insert DFT features, such as scan insertion and boundary scan, into the design.
All TC commands are directly invoked from dc_shell.

Formality

Formality is the Synopsys formal verification tool. This tool was introduced recently by Synopsys. It is fully compatible with all Synopsys formats. The tool features enhanced graphical debugging capabilities that include a schematic representation of the logic under verification, with visual suggestions annotated to the schematic as pointers to possibly incorrect logic. It also provides suggestions for possible fixes to the design.

3.2 Synthesis Environment

As with most EDA products, Synopsys tools require a setup file that specifies the technology library location and other parameters used for synthesis. Synopsys also defines its own format for storing and processing the information. This section highlights such details.

3.2.1 Startup Files

There is a common startup file called ".synopsys_dc.setup" for all tools of the DC family. A separate startup file, named ".synopsys_pt.setup", is required for PT. This file is in Tcl format, and contains path information for the technology libraries and parameters required by PT.

The default startup files for both DC and PT reside in the Synopsys installation directory, and are automatically loaded upon invocation of these tools. These default files do not contain the design dependent data. Their function is to load the Synopsys technology independent libraries and other parameters. The design dependent data is specified by the user in the startup files. During startup, DC or PT reads these files in the following order:

1. Synopsys installation directory.
2. User's home directory.
3. Project working directory.

The settings specified in the startup files residing in the project working directory override the ones specified in the home directory, and so forth; i.e., the configuration specified in the project working directory takes precedence over all other settings. It is up to the discretion of the user to keep these files wherever it is convenient.
However, it is recommended that the design dependent startup files be kept in the working directory.

The minimum information required by Design Compiler is the search_path, target_library, link_library and symbol_library. PT requires only the search_path and link_path information. Typical startup files are shown in Example 3.1. Note the change in format between the DC and PT setup files.

Example 3.1 Synopsys Setup Files

DC .synopsys_dc.setup file

company = "Lucknow Technologies";
designer = "Clemson";
technology = "0.25 micron";
search_path = search_path + {"." "/usr/golden/library/std_cells" \
                             "/usr/golden/library/pads"}
target_library = {std_cells_lib.db}
link_library = {"*", std_cells_lib.db, pad_lib.db}
symbol_library = {std_cells_lib.sdb, pad_lib.sdb}

PT .synopsys_pt.setup file

set search_path [list . /usr/golden/library/std_cells \
                        /usr/golden/library/pads]
set link_path [list {*} std_cells_lib.db pad_lib.db]

3.2.2 System Library Variables

At this time, it is worth explaining the difference between the target_library and link_library system variables. The target_library specifies the name of the technology library containing the cells that the designer wants DC to infer and finally map to. The link_library defines the name of the library containing cells that are used solely for reference; i.e., cells in the link_library are not inferred by DC. For example, you may specify a standard cell technology library as the target_library, while specifying the pad technology library name and all other macros (RAMs, ROMs etc.) in the link_library list. This means that the user would synthesize the design targeting the cells present in the standard cell library, while linking to the pads and macros that are instantiated in the design. If the pad library were included in the target_library list, DC might use the pads to synthesize the core logic.
The target library name should also be included in the link_library list, as shown in Example 3.1. This is important while reading a gate-level netlist in DC. DC will not be able to link to the mapped cells in the netlist if the target library name is not included in the link_library list. In this case, DC generates a warning stating that it was unable to resolve references for the cells present in the netlist.

The target_library and link_library system variables allow the designer to better control the mapping of cells. These variables also provide a useful means to re-map a gate-level netlist from one technology to another. In this case, the link_library may contain the old technology library name, while the target_library may contain the new technology library. Re-mapping can be performed by using the translate command in dc_shell.

The symbol_library system variable holds the name of the library containing the graphical representations of the cells in the technology library. It is used to represent the gates schematically while using the graphical front-end tool, DA. The symbol libraries are identified with a ".sdb" extension. If this variable is omitted from the setup file, DA will use a generic symbol library called "generic.sdb" to create schematics. Generally, all technology libraries provided by the library vendor include a corresponding symbol library. It is imperative that there be an exact match of the cell names and the pin names between the technology and symbol libraries. Any mismatch in a cell will cause DA to reject the cell from the symbol library, and use the cell from the generic library instead.

It must be noted that DC uses the link_library variable, whereas PT calls it the link_path. Apart from the difference in name and format, the application of both these variables is identical. Since PT is a gate-level static timing analyzer, it only works on structural gate-level netlists. Thus, PT
does not utilize the target_library variable.

3.3 Objects, Variables and Attributes

Synopsys supports a number of objects, variables and attributes in order to streamline the synthesis process. Using these, designers can write powerful dc_shell scripts to automate the synthesis process. It is therefore essential for designers to familiarize themselves with these terms.

3.3.1 Design Objects

There are eight different types of design objects categorized by DC. These are:

- Design: Corresponds to a circuit description that performs some logical function. The design may be stand-alone or may include other sub-designs. Although a sub-design may be part of a design, it is treated as another design by Synopsys.

- Cell: The instantiated name of a sub-design in the design. In Synopsys terminology, there is no differentiation between a cell and an instance; both are treated as cells.

- Reference: The definition of the original design to which the cell or instance refers. For example, a leaf cell in the netlist must be referenced from the link library, which contains the functional description of the cell. Similarly, an instantiated sub-design (called a cell by Synopsys) must be referenced in the design, which contains the functional description of the instantiated sub-design.

- Port: The primary inputs, outputs or IOs of the design.

- Pin: Corresponds to the inputs, outputs or IOs of the cells in the design. (Note the difference between port and pin.)

- Net: The signal names, i.e., the wires that hook up the design together by connecting ports to pins and/or pins to each other.

- Clock: The port or pin that is identified as a clock source. The identification may be internal to the library, or it may be done using dc_shell commands.

- Library: Corresponds to the collection of technology specific cells that the design is targeting for synthesis, or linking to for reference.
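The distinction between these object types becomes concrete when querying a design. As a small illustration in dc_shell syntax (the find command itself is covered in Section 3.4):

```
dc_shell> current_design tap_controller
dc_shell> find(port)        /* primary ports of the design         */
dc_shell> find(cell, "*")   /* instantiated cells (instances)      */
dc_shell> find(net, "*")    /* the nets connecting ports and pins  */
```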
3.3.2 Variables

Variables are placeholders used by DC for the purpose of storing information. The information may relate to instructions for tailoring the final netlist, or it may contain user-defined values used to automate the synthesis process. Some variables are pre-defined by DC and may be used by the designer to obtain the current value stored in the variable. For example, the variable called "dc_shell_status" has a special meaning to DC, while "captain_picard" has no meaning to DC. The latter may be used to hold any user-defined value for scripting purposes.

All variables are global and last only for the duration of the session. They are not saved along with the design database. Upon completion of the dc_shell session, the values of the variables are lost. Most dc_shell variables have a default value associated with them, which they inherit at the start of the session. For instance, the following variable uses IEEE.std_logic_1164 as its default value, and you need to specify additional values (packages in this case) for DC to write out the "USE" clause with the specified name in the synthesized VHDL netlist. For example:

dc_shell> vhdlout_use_packages = { "IEEE.std_logic_1164", \
                                   "STD_LIB" }

Users may use variables to gather useful information about the design. Example 3.2 provides a dc_shell script, aliased as reg_count, which may be used to find the number of flops per clock domain. To use the following script, the clock name in the design should first be assigned to the variable named CLK.
Example 3.2

alias reg_count "list CLK > /dev/null; \
    if (dc_shell_status == 0) \
        {echo "Set the CLK variable for a chosen clock";} \
    else {create_clock CLK -name CLK -period 100; \
        remove_variable count > /dev/null; \
        count = 0 > /dev/null; \
        foreach (reg, all_registers(-clock CLK)) { \
            count = count + 1; \
        } \
        echo "Number of flops for the chosen clock is:" count; \
    }"

A list of all DC variables may be obtained by using the following dc_shell command:

list -variables all

Variables may be removed from DC by using the following dc_shell command:

remove_variable <variable_name>
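As a further illustration of user-defined variables, the short dc_shell fragment below stores a list of sub-design names in a variable and reads them in a loop. The block names are invented for this sketch:

```
/* hypothetical block names, for illustration only */
my_blocks = {blockA blockB blockC}
foreach (blk, my_blocks) {
    read blk + ".db"
}
```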
The variables used by PT differ from those of DC in syntax and application. PT uses Tcl based syntax, while DC uses its own. The applications of the PT variables are based on Tcl constructs. Usage of PT variables through Tcl is described in detail in Chapter 11.

3.3.3 Attributes

Attributes are similar in nature to variables. Both store information. However, attributes store information on a particular design object, such as a net, cell or clock. Generally, attributes are pre-defined and have special meaning to DC, though designers may set their own attributes if desired. For example, set_dont_touch is a pre-defined attribute, used to set a dont_touch on a design, thereby disabling DC optimization on that design.

Attributes are set on and retrieved from design objects by using the following commands:

set_attribute <object_list> <attribute_name> <attribute_value>

get_attribute <object_list> <attribute_name>
For example, one may use the following commands to find the max_transition value set in the library (called "STD_LIB"), and the area attribute set for an inverter (INV2X) in that library:

dc_shell> get_attribute STD_LIB default_max_transition
dc_shell> get_attribute STD_LIB/INV2X area

The "help attributes" command in dc_shell provides a complete listing of the attributes used by DC.

3.4 Finding Design Objects

One of the most useful commands provided by DC is the find command. Sometimes it becomes necessary to locate objects in dc_shell for the purpose of scripting or automating the synthesis process. The find command is used to locate a list of design or library objects in DC.

find <object_type> <name_list> -hierarchy

The <object_type> field refers to the object type: design, port, reference, cell, clock, net, pin or library. The <name_list> is an optional argument and, as the term "name" suggests, is used to specify the design or library objects present in dc_shell. If omitted, all objects of the specified type are returned. One of the most important arguments of the find command is the -hierarchy option. This is used to force the find command to traverse the whole hierarchy in search of the specified object.

This command is also useful in explicitly identifying the object type (net, port etc.) and name to which a particular constraint or attribute is being applied. For instance, consider a design that contains a port and a net having the same name, say "enterprise". Annotating a value on "enterprise", using the set_load command, causes a problem, because the name "enterprise" does not specify a unique object. To resolve this conflict, the user may use the find command in conjunction with the set_load command, to explicitly identify the object type the set_load is to be applied to, as follows:

dc_shell> set_load 2.0 find(net, "enterprise")

The usefulness of the find command is highlighted by its ability to accept wildcard characters, in case the user does not know the full name of the object. This feature makes the command extremely powerful; for example, it may be used to apply a dont_touch attribute on all sub-designs whose names begin with subA_ and subB_, as follows:

dc_shell> set_dont_touch find(design, {subA_* subB_*})

The following examples further illustrate the usage of this command:

dc_shell> find port
  Lists all the ports of the current_design.

dc_shell> set_dont_touch_network find(net, {"gen_clk", "scan"})
  Applies the dont_touch_network attribute on the specified nets.
dc_shell> remove_attribute find(design, "*" -hierarchy) dont_touch
  Removes the dont_touch attribute from the whole design, throughout the hierarchy.

dc_shell> find(pin, stdcells_lib/DFF1/*)
  Lists all the pins of the DFF1 cell present in the library called stdcells_lib.

Similar to DC, PT also provides this capability, albeit differently. PT uses the get_* commands to find objects in a design. The objects to be found are encompassed within the get command, for example: get_ports, get_nets, get_designs, get_lib_cells, get_cells etc. A full list of the get commands is provided in the PrimeTime User Guide.

3.5 Synopsys Formats

Most Synopsys products support and share a common internal structure, called the "db" format. The db files are the binary compiled forms representing the text data, be it the RTL code, the mapped gate-level designs, or the Synopsys library itself. The db files may also contain any constraints that have been applied to the design. In addition, all Synopsys tools understand the following HDL formats. DC is capable of reading or writing any of these formats:

1. Verilog
2. VHDL
3. EDIF

Today, Verilog and VHDL are the two main HDLs in use for coding a design. EDIF (Electronic Design Interchange Format) is primarily utilized for porting a gate-level netlist from one tool to another. EDIF was a popular choice a few years back. However, recently Verilog has gained popularity and dominance, prompted by its simple-to-read format and description. Most EDA tools today support both Verilog and EDIF. VHDL in general is not used for porting the netlist from one vendor tool to another, since it requires the use of IEEE packages, which may vary between different tools. This language is essentially used for the purpose of coding the design and for system level verification.

3.6 Data Organization

It is a good practice to organize files according to their formats. This facilitates automating the synthesis process.
A common practice is to organize them using the following file extensions:

    Script files:                   .scr
    RTL Verilog files:              .v
    Synthesized Verilog netlists:   .sv
    RTL VHDL files:                 .vhd
    Synthesized VHDL netlists:      .svhd
    EDIF files:                     .edf
    Synopsys database files:        .db
    Reports:                        .rpt
    Log files:                      .log

3.7 Design Entry

Before synthesis, the design must be entered into DC in the RTL format (although other formats also exist). DC provides the following two methods of design entry:

a) the read command
b) the analyze & elaborate commands

Synopsys initially introduced the read command, which was then followed by the analyze/elaborate commands. The latter provide a fast and powerful method of design entry compared to the read command, and are recommended for RTL design entry.

The analyze and elaborate commands are two different commands, allowing designers to first analyze the design for syntax errors and RTL translation, before building the generic logic for the design. The generic logic, or GTECH components, are part of the Synopsys generic technology independent library. They are unmapped representations of boolean functions and serve as placeholders for the technology dependent library.

The analyze command also stores the result of the translation in a specified design library (a UNIX directory) that may be used later. For example, a design analyzed previously may not need re-analysis and can merely be elaborated, thus saving time. Conversely, the read command performs the function of both the analyze and elaborate commands, but does not store the analyzed results, therefore making the process slow in comparison. Parameterized designs (such as those using the generic statement in VHDL) must use the analyze and elaborate commands in order to pass the required parameters while elaborating the design. The read command should be used for entering pre-compiled designs or netlists in DC. The following table lists the major differences between the read and analyze/elaborate commands for various categories:

Table 3-1 Difference between analyze/elaborate and read commands

In contrast to DC, PT uses different commands for design entry.
PT, being a static timing analyzer, only works on mapped structural netlists. The design entry commands used by PT are described in Chapter 11.

3.8 Compiler Directives

Sometimes it is necessary to control the synthesis process from the HDL source itself. This control is primarily needed because of differences that may exist between the synthesis and the simulation environments. At other times, the control is needed simply to direct DC to map to certain types of components, or to embed the constraints and attributes directly in the HDL source code.

DC provides a number of compiler directives targeted specifically at the Verilog and VHDL design entry formats. These directives provide the means to control the outcome of synthesis directly from the HDL source code. The directives are specified as "comments" in the HDL code, but have specific meaning for DC. These special comments alter the synthesis process, but have no effect on simulation.

The following sub-sections describe some of the most commonly used directives, both for Verilog and VHDL formats. For a complete list of directives, users are advised to refer to the Design Compiler Reference Manual.

3.8.1 HDL Compiler Directives

The HDL compiler directives refer to the translation process of RTL in Verilog format to the internal format used by Design Compiler. As stated above, specific aspects of the translation are controlled by "comments" within the Verilog source code. Each directive begins with the regular Verilog comment // or /*, followed by the keyword "synopsys" (all in lower case). Generally, users prefer the former style to specify HDL compiler directives. Therefore, to keep it simple, only the // style of comments for HDL directives is discussed in this section. All comments beginning with // synopsys are assumed to contain only HDL compiler directives.
DC displays an error if anything apart from HDL compiler directives (say, other comments or parts of Verilog code) is present after the // synopsys statement.

3.8.1.1 translate_off and translate_on Directives

These are some of the most useful and frequently used directives. They instruct DC to stop translation of the Verilog source code from the point it encounters "// synopsys translate_off", and to start the translation again after it reaches the next directive, "// synopsys translate_on". These directives must be used in pairs, with the translate_off directive taking the lead.

Consider a scenario where part of the code present in the source RTL is meant solely for the purpose of dynamic simulation (or maybe the test bench is structured to make use of these statements). Example 3.3 illustrates such a scenario, which contains the Verilog `ifdef statement to facilitate setting parameters at the command line during simulation. Such code is clearly un-synthesizable, since the VENDOR_ID depends on the mode specified during simulation. Furthermore, since the HDL compiler cannot handle this statement, it issues an error stating that the design could not be read due to "Undefined macro `ifdef ...".

Example 3.3

`ifdef MY_COMPANY
    `define VENDOR_ID 16'h0083
`else
    `define VENDOR_ID 16'h0036
`endif

The translate_off and translate_on HDL directives may be used in this case to bypass the "simulation only" parts of the Verilog code, as illustrated in Example 3.4. The resulting logic will contain the VENDOR_ID value pertaining to MY_COMPANY only. To change it to the other value, the user has to edit the code and move the HDL directives to make the other VENDOR_ID value visible.
Example 3.4

    // synopsys translate_off
    `ifdef MY_COMPANY
    // synopsys translate_on
        `define VENDOR_ID 16'h0083
    // synopsys translate_off
    `else
        `define VENDOR_ID 16'h0036
    `endif
    // synopsys translate_on

3.8.1.2 parallel_case and full_case Directives

Unlike VHDL, where all conditions within the case statement are required to be mutually exclusive, the use of the case statement in Verilog provides different choices, and may be used to infer a priority encoder. In some cases the priority encoder is not desired, but one may still want to use the case statement to infer a multiplexer. The // synopsys parallel_case directive forces DC to generate multiplexer logic instead of building the priority encoder logic. This directive is generally used when coding a state machine, driven by the fact that a state machine cannot exist in more than one state at a given time.

This directive forces DC to evaluate all case conditions in parallel, with execution for all conditions evaluating to true at once. If care is not exercised, this may result in a mismatch between simulation and the synthesized logic. To avoid this problem and to make the RTL code more robust, it is suggested that this directive be used only when the user knows that exactly one condition will evaluate to true at a given time.

The parallel_case directive should be used immediately following the case statement, as shown in Example 3.5.

Example 3.5

    always @ (cs_state)
    begin
        case (cs_state)    // synopsys parallel_case
            2'b00:   ns = 2'b01;
            2'b01:   ns = 2'b10;
            2'b10:   ns = 2'b00;
            default: ns = 2'b00;
        endcase
    end

The full_case directive is also specific to Verilog HDL. VHDL requires that all conditions in the case statement be covered. In contrast, Verilog, because of its "openness", does not require all conditions of the case statement to be covered.
Although Verilog does provide a default clause that can be used to cover the remaining conditions, designers sometimes forget to include it (which is perfectly acceptable to the Verilog compiler, since it is not a requirement). This causes DC to infer unwanted latches, due to non-assignment of values to the register or net under all possible conditions. To prevent this, the // synopsys full_case directive should be used, which informs DC that all conditions are covered. The // synopsys full_case directive is also used just after the case statement, as shown in Example 3.6. Notice that the default clause is omitted within the case statement.

Example 3.6

    always @ (SEL or A1 or A2)
    begin
        case (SEL)    // synopsys full_case
            2'b00: Z = A1;
            2'b01: Z = A2;
            2'b10: Z = A1 & A2;
        endcase
    end

3.8.2 VHDL Compiler Directives

Similar to the HDL compiler directives, the VHDL compiler directives are special VHDL comments that affect the actions of the VHDL compiler. All VHDL compiler directives start with the VHDL comment (--), followed by either the synopsys or the pragma keyword. This provides a special meaning to the compiler and compels it to perform the specified task.

3.8.2.1 translate_off and translate_on Directives

These directives work in a similar fashion to the ones described previously for the HDL compiler, with the exception that they require the VHDL comment style, as follows:

    -- synopsys translate_off
    -- synopsys translate_on

    -- pragma translate_off
    -- pragma translate_on

The VHDL compiler ignores any RTL code between the translate_off/on directives; however, it does perform a syntax check on the embedded code. To prevent the compiler from performing syntax checks, the code must be made completely transparent. This can be achieved by setting the following variable to true:

    hdlin_translate_off_skip_text = true

These directives are primarily used to block simulation-specific constructs in the VHDL code.
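As a minimal sketch of this usage (the library and package names below, sim_models and vital_pkg, are hypothetical, chosen only for illustration), a simulation-only clause can be hidden from DC like this:

```vhdl
-- synopsys translate_off
library sim_models;                -- simulation-only VITAL model library
use sim_models.vital_pkg.all;      -- hypothetical package name
-- synopsys translate_on
```

During simulation the tools see an ordinary library clause; DC skips everything between the two directives.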
For example, the user may have a library statement present in the netlist, which specifies the name of the library that contains the VITAL models of the gates present in the netlist. This means that, for the purpose of simulation, the gates present in the netlist are being referenced from this library. Upon reading the VHDL code, DC produces an error, since the library statement is specific to simulation only. To circumvent this problem, one may envelop the library statement with the above directives to force DC to completely ignore the library statement.

3.8.2.2 synthesis_off and synthesis_on Directives

The synthesis_off/on directives work in a manner similar to the translate_off/on directives. The behavior of the synthesis_off/on directives themselves is not affected by the value of the hdlin_translate_off_skip_text variable. However, the translate_off/on directives perform exactly the same function if the value of the variable specified above is set to false.

The above directives are the preferred approach to hide the simulation-only constructs. Though the VHDL compiler performs syntax checks of the code present within these directives, it ignores the code for the purpose of synthesis.

Syntactically, these directives may be used as follows:

    -- pragma synthesis_off
    -- pragma synthesis_on

3.9 Chapter Summary

This chapter introduced the reader to various terminology and concepts used by Synopsys. Starting from a brief description and purpose of some of the tools provided by Synopsys, the chapter covered the Synopsys environment, including examples of the startup files needed both for DC and PT, followed by the concepts of objects, variables and attributes. A brief introduction was also provided to the find command and its usefulness. Different Synopsys formats were discussed along with design entry methods. The advantages and disadvantages of using the read versus the analyze/elaborate commands were also covered. Finally, the chapter concluded by describing some of the most useful directives used by DC for the purpose of hiding simulation-only constructs. Throughout the chapter, various examples were provided to facilitate the user in understanding these concepts.

SYNOPSYS TECHNOLOGY LIBRARY

The Synopsys technology library format has almost become the de-facto library standard. Its compact yet informative format allows adequate representation of deep sub-micron technologies. The popularity of the Synopsys library format is evident from the fact that most place and route tools provide a direct translation of the Synopsys libraries, with almost a one-to-one mapping between the timing models in Synopsys libraries and the place and route timing models.

A basic understanding of the library format and delay calculation methods is the key to successful synthesis. Designers usually do not concern themselves with full details of the technology library as long as the library contains a variety of cells, each with different drive strengths. However, in order to optimize the design successfully, it is essential for designers to have a clear understanding of the delay calculation method used by DC, along with the wire-load modeling and cell descriptions.
It is, therefore, the intent of this chapter to describe the Synopsys technology library from the designer's perspective, rather than discussing details about the structural and functional syntax of the library.

4.1 Library Basics

The technology library consists of the following parts:

a) Library group
b) Library level attributes
c) Environment description
d) Cell description

4.1.1 Library Group

The library group statement specifies the name of the library, followed by an opening brace; the matching closing brace is the last entry in the library file. Anything between the two braces is part of the entire library group:

    library (ex25) { /* start of library */
        < library description >
    } /* end of library */

4.1.2 Library Level Attributes

    library (ex25) {
        technology (cmos) ;
        delay_model : table_lookup ;
        date : "Feb 29, 2000" ;
        revision : "1.0" ;
        current_unit : ... ;
        time_unit : ... ;
        voltage_unit : ... ;
        pulling_resistance_unit : ... ;
        capacitive_load_unit (1.0, pf) ;
        default_inout_pin_cap : 1.5 ;
        default_input_pin_cap : ... ;
        default_output_pin_cap : 0.0 ;
        default_max_fanout : 10.0 ;
        default_max_transition : 3.0 ;
        default_operating_conditions : NOMINAL ;
        in_place_swap_mode : match_footprint ;
    }

4.1.3 Environment Description

Environment attributes are defined in the library to model the variations of temperature, voltage and manufacturing processes. These consist of scaling factors (derating), timing range models and operating conditions. In addition, the environment description also contains wire-load models that are used by DC to estimate interconnect wiring delays.

4.1.3.1 Scaling Factors

The scaling factors or K-factors are multipliers that provide a means for derating the delay values based on the variations in process, voltage and temperature, or simply PVT. Only some of the K-factor statements are shown below as an example.
Please refer to the Library Compiler Reference Manual for full details.

    k_process_fall_transition : 1.0 ;
    k_process_rise_transition : 1.2 ;
    k_process_fall_propagation : 0.4 ;
    k_process_rise_propagation : 0.4 ;
    k_temp_fall_transition : ... ;
    k_temp_rise_transition : ... ;
    k_temp_fall_propagation : ... ;
    k_temp_rise_propagation : ... ;
    k_volt_fall_transition : ... ;
    k_volt_rise_transition : ... ;
    k_volt_fall_propagation : ... ;
    k_volt_rise_propagation : ... ;

4.1.3.2 Operating Conditions

Sets of operating conditions defined in the library specify the process, temperature, voltage and the RC tree model. These are used during synthesis and timing analysis of the design. A library is characterized using one set of operating conditions. During synthesis or timing analysis, if another set of operating conditions is specified, then DC uses the K-factors to derate the delay values based upon the specified operating conditions. Library developers may define any number of operating conditions in the library. Typically, the following operating conditions are defined in the technology library:

    operating_conditions (WORST) {
        process : 1.3 ;
        temperature : 100.0 ;
        voltage : 2.75 ;
        tree_type : worst_case_tree ;
    }

    operating_conditions (NOMINAL) {
        process : 1.0 ;
        temperature : 25.0 ;
        voltage : 3.00 ;
        tree_type : balanced_tree ;
    }

    operating_conditions (BEST) {
        process : 0.7 ;
        temperature : 0.0 ;
        voltage : 3.25 ;
        tree_type : best_case_tree ;
    }

The process, temperature and voltage attributes have already been explained previously. The tree_type attribute defines the environmental interconnect model to be used. DC uses the value of this attribute to select the appropriate formula while calculating interconnect delays. The worst_case_tree value models the extreme case where the load pin is at the most distant end of a net from the driver. In this case the load pin incurs the full net capacitance and resistance. The balanced_tree model uses the case where all load pins are on separate and equal interconnect wires from the driver.
The load pin in this case incurs an equal portion of the net capacitance and resistance. The best_case_tree models the case where the load pin is sitting right next to the driver. The load pin incurs only the net capacitance, without any net resistance.

4.1.3.3 Timing Range Models

The timing range models provide the additional capability of computing arrival times of signals based upon the specified operating conditions. This capability is provided by Synopsys to accommodate fluctuations in the operating conditions for which the design has been optimized. DC uses the timing ranges to evaluate the arrival times of the signals during timing analysis.

    timing_range (BEST) {
        faster_factor : 0.5 ;
        slower_factor : 0.6 ;
    }

    timing_range (WORST) {
        faster_factor : 1.2 ;
        slower_factor : 1.3 ;
    }

4.1.3.4 Wire-Load Models

The wire_load group contains information that DC utilizes to estimate interconnect wiring delays during the pre-layout phase of the design. Usually, several models appropriate to different sizes of logic are included in the technology library. These models define the capacitance, resistance and area factors. In addition, the wire_load group also specifies slope and fanout_length for the logic under consideration. The capacitance, resistance and area factors represent the wire capacitance, resistance and area respectively, per unit length of interconnect wire. The fanout_length attribute specifies values for the length of the wire associated with the number of fanouts. Along with fanout and length, this attribute may also contain values for other parameters, such as average_capacitance, standard_deviation and number_of_nets. These attributes and their values are written out automatically when generating wire-load models through DC.
For manual creation, only the values for fanout and length are needed, using the fanout_length attribute. For nets exceeding the longest length specified in the fanout_length attribute, the slope value is used to extrapolate linearly from the existing fanout_length values, in order to determine the net length.

    wire_load (SMALL) {
        resistance : 0.2 ;
        capacitance : 1.0 ;
        area : 0 ;
        slope : 0.5 ;
        fanout_length( 1, 0.020 ) ;
        fanout_length( 2, 0.042 ) ;
        fanout_length( 3, 0.064 ) ;
        fanout_length( 4, 0.087 ) ;
        fanout_length( 1000, 20.0 ) ;
    }

    wire_load (MEDIUM) {
        resistance : 0.2 ;
        capacitance : 1.0 ;
        area : ... ;
        slope : 1.0 ;
        fanout_length( 1, 0.022 ) ;
        fanout_length( 2, 0.046 ) ;
        fanout_length( 3, 0.070 ) ;
        fanout_length( 4, 0.095 ) ;
        fanout_length( 1000, 30.0 ) ;
    }

    wire_load (LARGE) {
        resistance : 0.2 ;
        capacitance : 1.0 ;
        area : ... ;
        slope : 1.5 ;
        fanout_length( 1, 0.025 ) ;
        fanout_length( 2, 0.053 ) ;
        fanout_length( 3, 0.080 ) ;
        fanout_length( 4, 0.110 ) ;
        fanout_length( 1000, 40.0 ) ;
    }

In addition to the wire_load groups, other attributes are defined in the library to automatically select the appropriate wire_load group, based on the total cell area of the logic under consideration.

    wire_load_selection (AUTO_WL) {
        wire_load_from_area (     0,  5000, "SMALL" ) ;
        wire_load_from_area (  5000, 10000, "MEDIUM" ) ;
        wire_load_from_area ( 10000, 15000, "LARGE" ) ;
    }

    default_wire_load_selection : AUTO_WL ;
    default_wire_load_mode : enclosed ;

It is recommended that the value of default_wire_load_mode be set to "enclosed" or "segmented" instead of "top". The wire-load modes and their application are described in detail in Chapter 6.

4.1.4 Cell Description

Each cell in the library contains a variety of attributes describing the function, timing and other information related to the cell. Rather than going into detail and describing all the possible attributes, only the relevant attributes and related information useful to designers are shown in the example below:

    cell (BUFFDO) {
        area : 5.0 ;
        pin (Z) {
            max_capacitance : 2.0 ;
            max_fanout : 4.0 ;
            function : "I" ;
            direction : output ;
            timing () {
                < timing information >
                related_pin : "I" ;
            }
        }
        pin (I) {
            direction : input ;
            capacitance : 0.04 ;
            fanout_load : 2.0 ;
            max_transition : 1.5 ;
        }
    }

The area attribute defines the cell area as a floating-point number without any units, followed by the pin descriptions and their related timing. In addition, several design rule checking (DRC) attributes may be associated with each pin of the cell. These are:

    fanout_load attribute for input pins.
    max_fanout attribute for output pins.
    max_transition attribute for input or output pins.
    max_capacitance attribute for output or inout pins.

The DRC conditions are based on the vendor's process technology and should not be violated. The DRC attributes define the conditions in which the cells of the library operate safely. In other words, cells are characterized under certain conditions (output loading, input slope etc.). Designs violating these conditions may have a severe impact on the normal operation of the cells, thereby causing the fabricated chip to fail.

Even though the previous example contains all four attributes, generally only two are used. In most cases, either fanout_load along with max_fanout, or max_transition with max_capacitance, are used. The fanout_load and max_fanout DRC attributes are related to each other, such that the sum of the fanout_load values at the input pins of the driven cells cannot exceed the max_fanout value at the output pin of the driver. Consider the cell (BUFFDO) shown in the previous example. This cell contains a max_fanout value of 4.0 associated with the output pin Z, while the fanout_load value at its input is 2.0.
This cell, therefore, cannot drive more than two of its own kind (BUFFDO) of cells, since

    max_fanout (4) = fanout_load (2) of 1st cell + fanout_load (2) of 2nd cell

If DRC violations occur, then DC replaces the driving cell with another that has a higher max_fanout value.

The max_transition attribute is generally applied to the input pin, whereas max_capacitance is applied to the output pin. Both attributes perform a function similar to the max_fanout and fanout_load attributes. The difference is that the max_transition attribute specifies that any net with a transition time greater than the max_transition value of the load pin cannot be connected to that pin. The max_capacitance attribute at the output pin specifies that the output pin of the driver cell cannot connect to any net whose total capacitance (interconnect plus load pin capacitance) is greater than or equal to the maximum value defined at the output pin. If DRC violations occur, then DC replaces the driving cell with another that has a higher max_capacitance value.

In addition, the output pin contains attributes defining the function of the pin, and the delay values related to the input pin. The input pin defines its pin capacitance and direction. The capacitance attribute should not be confused with the max_capacitance attribute. DC uses the capacitance attribute to perform delay calculations only, while max_capacitance, as explained above, is used for design rule checking. It is also worthwhile to mention here that for sequential cells, the clock input pin uses another attribute (clock : true) that specifies that the input pin is of type "clock". More details can be found in the Library Compiler Reference Manual.

The cell's DRC attributes are often the most criticized part of the cell library. Library developers often find it impossible to satisfy everyone and are often blamed for not implementing the "right" numbers for these attributes.
The problem is caused because the library, to a certain extent, is dependent upon the coding style and chosen methodology. What works perfectly for one design may produce inadequate results for another design. It is therefore the intent of this section to briefly explain the solutions that designers may use to tailor the library to suit their needs.

In order to accommodate the design requirements, it is possible to change the values of the above DRC attributes on a per-cell basis. However, it must be noted that the DRC attributes set in the library can only be tightened; they cannot be loosened. This can only be done if the attributes are pre-specified in the cell description. Users should realize that if these attributes are not already specified on the pin of the cell in the technology library, it is not possible to add these attributes on the pin from dc_shell.

For instance, to change the max_fanout value specified for pin Z of cell BUFFDO (of library ex25 described previously) from 4.0 to 2.0, the following dc_shell command may be used:

    dc_shell> set_attribute find(pin, ex25/BUFFDO/Z) max_fanout 2.0

One may also use wildcards in the above command to cover a variety of cells. This is useful for cases where a global change is required. For example, users may use the following command to change the max_fanout value on all cells with 0 drive strength in the technology library:

    dc_shell> set_attribute find(pin, ex25/*DO/Z) max_fanout 2.0

Similarly, the set_attribute command may be used to alter the value of other DRC attributes. The above command may be specified in the .synopsys_dc.setup file for global implementation.

4.2 Delay Calculation

Synopsys supports several delay models. These include the CMOS generic delay model, the CMOS piecewise linear delay model and the CMOS non-linear table lookup model.
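To give a concrete flavor of the table-lookup approach before the details that follow, here is a minimal Python sketch of bilinear interpolation over a delay table indexed by input transition time and output load capacitance. All axis values and delay numbers below are invented for illustration; real values come from the characterized library.

```python
def nldm_delay(trans_axis, load_axis, table, trans, load):
    """Look up cell delay by bilinear interpolation in an NLDM-style table."""
    def bracket(axis, x):
        # Find the index i such that axis[i] <= x <= axis[i+1].
        for i in range(len(axis) - 1):
            if axis[i] <= x <= axis[i + 1]:
                return i
        raise ValueError("value outside characterized range")

    i = bracket(trans_axis, trans)
    j = bracket(load_axis, load)
    # Fractional position inside the bounding square of four table points.
    tx = (trans - trans_axis[i]) / (trans_axis[i + 1] - trans_axis[i])
    ty = (load - load_axis[j]) / (load_axis[j + 1] - load_axis[j])
    # Blend the four surrounding points.
    return (table[i][j] * (1 - tx) * (1 - ty)
            + table[i + 1][j] * tx * (1 - ty)
            + table[i][j + 1] * (1 - tx) * ty
            + table[i + 1][j + 1] * tx * ty)

# Invented 2x2 characterization: rows = input transition (ns),
# columns = output load (pF), entries = delay (ns).
trans_axis = [0.1, 1.0]
load_axis = [0.05, 0.5]
table = [[0.2, 0.8],
         [0.5, 1.1]]

delay = nldm_delay(trans_axis, load_axis, table, 0.55, 0.275)  # ~0.65 ns
```

A point outside the table would raise an error in this sketch; DC instead extrapolates, which is where the suspiciously large delay values discussed later in this section can appear.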
Presently, the first two models are not in common use, due to their inefficiency in representing the true delays caused by VDSM geometries. The non-linear delay model is the most prevalent delay model used in the ASIC world.

4.2.1 Delay Model

The non-linear delay model (NLDM) method uses a circuit simulator to characterize a cell's transistors with a variety of input slew rates and output load capacitances. The results form a table, with input transition and output load capacitance as the deciding factors for calculating the resultant cell delay. Figure 4-1, shown below, depicts the resulting delays and slew rates, interpolated to produce a non-linear delay model. The model's accuracy depends on the precision and range of the chosen input slew rates and load capacitances.

Figure 4-1. NLDM Table (cell delay as a function of input transition and output capacitance)

If the delay point falls within a square of the table in the library, then the delay is computed using interpolation techniques. The values of the surrounding four points are used to determine the delay value, using numerical methods. The problem arises when any of the parameters falls outside the table. DC does its best to extrapolate the resulting delay, but often ends up with an extremely high value. This may be a blessing in disguise, since a high value is easily noticeable during static timing analysis, providing designers an opportunity to correct the situation.

4.2.2 Delay Calculation Problems

The delay calculation of a cell is performed using the input transition time and the capacitive loading seen at the output. The input transition time of a cell is evaluated based upon the transition delay of the driving cell (previous cell). If the driving cell contains more than one timing arc, then the worst transition time is used as input to the driven cell. This directly impacts the static timing analysis and the generated SDF file for a design. Consider the logic shown in Figure 4-2.
The signals reset and signal_a are inputs to the instance U1. Let's presume that the reset signal is non-critical compared to signal_a. The reset signal is a slow signal; therefore, its transition time is high compared to that of signal_a. This causes two transition delays to be computed for cell U1 (2 ns from A to Z, and 0.3 ns from B to Z). When generating SDF, the two values will be written out separately as part of the cell delay for cell U1. However, the question now arises: which of the two values does DC use to compute the input transition time for cell U2? DC uses the worst (maximum) transition value of the preceding gate (U1) as the input transition time for the driven gate (U2). Since the transition time of the reset signal is greater than that of signal_a, the 2 ns value will be used as the input transition time for U2. This causes a large delay value to be computed for cell U2 (shaded cell).

Figure 4-2. Delay Computation (reset and signal_a driving U1, which drives U2)

To avoid this problem, one needs to inform DC not to perform the delay calculation for the timing arc from pin A to pin Z of cell U1. This step should be performed before writing out the SDF. The following dc_shell command may be used for this purpose:

    dc_shell> set_disable_timing U1 -from A -to Z

Unfortunately, this problem also arises during static timing analysis. Failure to disable the timing computation of the false path leads to large delay values being computed for the driven cell.

4.3 What is a Good Library?

Cell libraries determine the overall performance of the synthesized logic. A good cell library will result in a fast design with the smallest area, whereas a poor library will degrade the final result. Historically, cell libraries were schematic based. Designers would choose the appropriate cells and connect them manually to produce a netlist for the design. When the automatic synthesis engines became prevalent, the same schematic-based libraries were converted and used for synthesis.
However, since the synthesis engine relies on a number of factors for optimization, this approach almost always resulted in poor performance of the synthesized logic. It is therefore imperative that the cell library be designed catering solely to the synthesis approach. The following guidelines outline the specific kinds of cells in the technology library desired by the synthesis engine.

a) A variety of drive strengths for all cells.
b) Larger varieties of drive strengths for inverters and buffers.
c) Cells with balanced rise and fall delays (used for gated clocks).
d) Same logical function and its inversion as separate outputs, within the same physical cell (e.g., OR gate and NOR gate as a single cell), again with a variety of drive strengths.
e) Same logical function and its inversion as separate cells (e.g., AND gate and NAND gate as two separate cells), with a variety of drive strengths.
f) Complex cells (e.g., AOI, OAI or NAND gate with one input inverted etc.) with a variety of high drive strengths.
g) High fanin cells (e.g., AOI with 6 inputs and one output) with a range of different drive strengths.
h) A variety of flip-flops with different drive strengths, both positive and negative-edge triggered.
i) Single or multiple outputs available for each flip-flop (e.g., Q only, or QN only, or both), each with a variety of drive strengths.
j) Flops containing different inputs for Set and Reset (e.g., Set only, Reset only, no Set or Reset, both Set and Reset).
k) A variety of latches, both positive and negative level-enabled, each with different drive strengths.
l) Several delay cells. These are useful when fixing hold-time violations.

Using the above guidelines will result in a library optimized to handle the synthesis algorithm. This provides DC with the means to choose from a variety of cells to implement the best possible logic for the design.
It is worthwhile to note that the usage of high fanin cells, although useful in reducing the overall cell area, may cause routing congestion, which may inadvertently cause timing degradation and/or an increase in the area of the routed design. It is therefore recommended that these cells be used with caution.

Some designers prefer to exclude the low drive strengths of high fanin cells from the technology library. This again is based on the algorithm used by the routing engine and the type of placement (timing driven etc.) used by designers. If the router is not constrained, then it uses a method by which it associates a weight with each net of the design while placing cells. Depending upon the weight of the net, the cells are pulled towards the source having the highest weight. High fanin cells have a larger weight associated with their inputs (because of the number of inputs) compared to the weight associated with their outputs (single output). Therefore, the router will place these cells near the gates that are driving them. This will result in the high fanin cell being pulled away from the cell it is supposed to be driving, causing a long net to be driven by the high fanin cell. If the high fanin cell is not strong enough to drive this long net (large capacitance), then the result will be the computation of a large cell delay for the high fanin cell, as well as for the driven gate (because of slow input transition time). By eliminating the low drive strengths of the high fanin cells from the technology library, this problem can be prevented after layout.

4.4 Chapter Summary

To summarize, this chapter described the contents of the Synopsys technology library from the designer's perspective. The emphasis was placed upon the correct usage and understanding of the library, rather than focusing on details that are relevant only to library developers. The chapter started with the basics of the technology library, with its separate groups.
The relevant portions of each group were explained in detail. This included explanation of the attributes that the library uses to perform its task. Special emphasis was given to describing the delay calculation method, along with operating conditions, wire-load modeling and cell description. At each step, the associated problems and workarounds were explained in detail. Finally, suggestions were provided to the user as to what constitutes a good library optimized for the synthesis engine. This includes helpful hints that take into account the router behavior of the layout tool.

PARTITIONING AND CODING STYLES

Successful synthesis depends strongly on proper partitioning of the design, together with a good HDL coding style. Logical partitioning is the key to successful synthesis (and to place and route, if layout is hierarchical). Traditionally, designers partitioned the design in accordance with the functionality of each block, giving no thought to the synthesis process. As a result of incorrect partitioning, the inflexible boundaries degrade the synthesis results, which makes optimization difficult. Partitioning the design correctly can significantly enhance the synthesized result. In addition, reduced compile time and simplified script management are also achieved.

A good coding style is imperative, not only for the synthesis process, but also for easy readability of the HDL code. Today, many designers stress only verifying the functionality of the design. Driven by time restrictions and/or lack of communication between team members, designers do not have the luxury of carefully scrutinizing the HDL coding style. The fact remains, however, that a good coding style not only results in a reduction of chip area and aids in top-level timing, but also produces faster logic.

5.1 Partitioning for Synthesis

Partitioning can be viewed as utilizing the "divide and conquer" concept to reduce complex designs into simpler and manageable blocks.
Promoting design reuse is one of the most significant advantages of partitioning the design. Apart from the ease in meeting timing constraints for a properly partitioned design, it is also convenient to distribute and manage different blocks of the design between team members. The following recommendations achieve the best synthesis results and a reduction in compile time.

a) Keep related combinational logic in the same module.
b) Partition for design reuse.
c) Separate modules according to their functionality.
d) Separate structural logic from random logic.
e) Limit blocks to a reasonable size (perhaps a maximum of 10K gates per block).
f) Partition the top level (separate I/O pads, boundary scan and core logic).
g) Do not add glue-logic at the top level.
h) Isolate the state machine from other logic.
i) Avoid multiple clocks within a block.
j) Isolate the block that is used for synchronizing multiple clocks.
k) WHILE PARTITIONING, THINK OF YOUR LAYOUT STYLE.

The group and ungroup commands provide the designer with the capability of altering the partitions in DC, after the design hierarchy has already been defined by the previously written HDL code. Figure 5-1 illustrates such an action.

Figure 5-1. Changing Partitions (before grouping, before ungrouping, after ungrouping)

The group command combines the specified instances into a separate block. In Figure 5-1, instances U1 and U2 are grouped together to form a sub-block named sub1, using the following commands:

    dc_shell> current_design top
    dc_shell> group {U1 U2} -design_name sub1

The ungroup command performs the opposite function. It is used to remove the hierarchy, as shown in Figure 5-1, by using the following commands:

    dc_shell> current_design top
    dc_shell> ungroup -all

The designer can also use the ungroup command along with the -flatten and -all options to flatten the entire hierarchy. This is illustrated below:

    dc_shell> ungroup -flatten -all

5.2 What is RTL?
Today, RTL or the Register Transfer Level is the most popular form of high-level design specification. An RTL description describes the design in terms of transformation and transfer of logic from one register to another. Logic values are stored in registers, where they are evaluated through some combinational logic and then re-stored in the next register. RTL functions like a bridge between software and hardware. It is text with strong graphical connotations: text that implies graphics or structure. It can be described as a technology-independent, textual structural description, similar to a netlist.

5.2.1 Software versus Hardware

A frequent obstacle to writing HDL code is the software mind-set. HDLs evolved from logic netlist representations. HDLs in their initial form (the Register Transfer Level) were a forum to represent logic in a format independent from any particular technology library. A higher level of HDL is the behavioral level, which allows the design to be independent of timing and explicit sequencing. Lately, many system designers have turned to HDLs to describe the full system.

Frequently, the expectation is that the synthesis tool will synthesize the HDL to the minimal area and maximum performance, regardless of how the HDL is written. The problem remains that at a high level there are numerous ways of writing code to perform the same function. For example, a conditional expression could be written using case statements or if statements. Logically, these expressions are responsible for performing the same task, but when synthesized they can give drastically different results, as far as the type of logic inferred, area, and timing are concerned. A reasonable caveat told to recent adopters of synthesis is: THINK HARDWARE!

5.3 General Guidelines

The following are general guidelines that every designer should be aware of.
There is no fixed rule to adhere to these guidelines; however, following them vastly improves the performance of the synthesized logic, and produces a cleaner design that is well suited for automating the synthesis process.

5.3.1 Technology Independence

HDL should be written in a technology independent fashion. Hard-coded instances of library gates should be minimized. Preference should be given to inference rather than instantiation. The benefit is that the RTL code can be implemented with any ASIC library and new technology through re-synthesis. This is especially important for synthesizable IP cores that are commonly used by many designs. In cases where instantiation of library gates is unavoidable, all the instantiated gates may be grouped together to form their own module. This helps in the management of library specific aspects of a design.

5.3.2 Clock Logic

a) Clock logic, including clock gating logic and reset generation, should be kept in one block, to be synthesized once and not touched again. This helps in a clean specification of the clock constraints. Another advantage is that the modules that are being driven by the clock logic can be constrained using ideal clock specifications.

b) Avoid multiple clocks per block; try keeping one clock per block. Such restrictions later help avoid difficulties that may arise while constraining a block containing multiple clocks. It also helps in managing clock skew that may arise at the physical level. Sometimes this becomes unavoidable, for instance where synchronization logic is present to sync signals from one clock domain to the other. For such cases, it is recommended that the designer isolate the sync logic and synthesize it separately using special techniques. This includes setting a dont_touch attribute on the sync logic before instantiating it in the main block.

c) Clocks should be given meaningful names. A suggestion is to keep a name for the clock that reflects its functionality in addition to its frequency.
Another good practice is to keep the same name for the clock, uniform throughout the hierarchy, i.e., the clock name should not change as it traverses the hierarchy. This simplifies script writing and helps in automating the synthesis process.

d) For DFT scan insertion, it is a requirement that the clocks be controllable from primary inputs. This may involve adding a mux at the clock source for controllability. Although not a hard and fast rule, it is recommended to hand-instantiate the clock-mux in the RTL (preferably at the top level of the design). This allows the designer to know the instance name of the clock-mux at the stage of RTL elaboration, which in turn allows the selective clock-mux timing path to be disabled, using set_disable_timing.

5.3.3 No Glue Logic at the Top

The top level should only be used for connecting modules together. It should not contain any combinational glue logic. One of the benefits of this style is that it makes redundant the very time consuming top-level compile; the top level can now be simply stitched together without undergoing additional synthesis. Absence of glue logic at the top level also facilitates layout, if performing hierarchical place and route.

5.3.4 Module Name Same as File Name

A good practice is to keep the module name (or entity name) the same as the file name. Never describe more than one module or entity in a single file. A single file should only contain a single module/entity definition for synthesis. This has enormous benefits in defining a clean methodology using scripting languages like PERL, AWK etc.

5.3.5 Pads Separate from Core Logic

Divide the top level into two separate blocks: "pads" and "core". Pads are usually instantiated and not inferred, therefore it is preferred that they be kept separate from the core logic. This simplifies setting the dont_touch attribute on all the pads of the design, simultaneously.
By keeping the pads in a separate block, we are isolating the library dependent part of the RTL code.

5.3.6 Minimize Unnecessary Hierarchy

Do not create unnecessary hierarchy. Every hierarchy sets a boundary. Performance is degraded if unnecessary hierarchies are created. This is because DC is unable to optimize efficiently across hierarchies. One may use the ungroup command to flatten the unwanted hierarchies, before compiling the design, to achieve better results.

5.3.7 Register All Outputs

This is a well-known Synopsys recommendation. The outputs of a block should originate directly from registers. Although not always practical, this coding/design style simplifies constraint specification and also helps optimization. This style prevents combinational logic from spanning module boundaries. It also increases the effectiveness of the characterize-write-script synthesis methodology by preventing the ping-pong effect that is common to this type of compilation technique.

5.3.8 Guidelines for FSM Synthesis

The following guidelines are presented for writing finite state machines that may help in optimizing the logic:

a) State names should be described using "enumerated types" in VHDL, or "parameters" in Verilog.
b) Combinational logic for computing the next state should be in its own process or always block, separate from the state registers.
c) Implement the next-state combinational logic with a case statement.

5.4 Logic Inference

High-level Description Languages (HDLs) like VHDL and Verilog are front-ends to synthesis. HDLs allow a design to be represented in a technology independent fashion. However, synthesis imposes certain restrictions on the manner in which the HDL description of a design is written. Not all HDL constructs can be synthesized. Not only that, synthesis expects HDLs to be coded in a specific way so as to get the desired results.
We can say that synthesis is template driven: if the code is written using the templates that are understood and expected by the synthesis tool, then the results will be correct and predictable. The templates and other coding patterns for synthesis are called coding styles. For quality results it is imperative that designers possess a keen understanding of the coding styles, logic inferences, and the corresponding logic structures that DC generates.

5.4.1 Incomplete Sensitivity Lists

This is one of the most common mistakes made by designers. Incomplete sensitivity lists may cause simulation mismatches between the source RTL and the synthesized logic. DC issues a warning for signals that are present in the process or always block, but are absent from the sensitivity list. This is primarily a simulation problem, since the process does not trigger when sensitized (because of the missing signal in the sensitivity list). The synthesized logic, however, is generally correct for blocks containing incomplete sensitivity lists.

Verilog Example

always @(weekend or go_to_beach or go_to_work)
begin
  if (weekend)
    action = go_to_beach;
  else if (weekday)
    action = go_to_work;
end

VHDL Example

process (weekend, go_to_beach, go_to_work)
begin
  if (weekend) then
    action <= go_to_beach;
  elsif (weekday) then
    action <= go_to_work;
  end if;
end process;

The examples illustrated above do not contain the signal "weekday" in their sensitivity lists. The synthesized logic may still be accurate; however, during simulation the process will not trigger each time the signal "weekday" changes value. This may cause a mismatch between the simulation result of the source RTL and the synthesized logic.

5.4.2 Memory Element Inference

There are two types of memory elements: latches and flip-flops. Latches are level-sensitive memory elements, while flip-flops in general are edge-sensitive. Latches are transparent as long as the enable to the latch is active.
At the time the latch is disabled, it holds the value present at the D input at its Q output. Flip-flops, on the other hand, respond to the rising or falling edge of the clock. Latches are simple devices, therefore they cover less area as compared to their counterparts, flip-flops. However, latches in general are more troublesome because their presence in a design makes DFT scan insertion difficult, although not impossible. It is also complicated to perform static timing analysis on designs containing latches, due to their ability to be transparent when enabled. For this reason, designers generally prefer flip-flops over latches.

The following sub-sections provide detailed information on how to avoid latches, as well as how to infer them, if desired.

5.4.2.1 Latch Inference

A latch is inferred when a conditional statement is incompletely specified. An if statement with a missing else part is an example of an incompletely specified conditional. Here is an example, both in Verilog and VHDL:

Verilog Example

always @(weekend)
begin
  if (weekend)
    action <= go_to_beach;
end

VHDL Example

process (weekend)
begin
  if (weekend = '1') then
    action <= go_to_beach;
  end if;
end process;

The above statement will cause DC to infer a latch enabled by a signal called "weekend". In the above example, "action" is not given any value when the signal "weekend" is 0. Always cover all the cases in order to avoid unintentional latch inference. This may be achieved by using an else statement, or using a default statement outside the if branch.

A latch may also get inferred from an incompletely specified case statement in Verilog.

`define sunny 2'b00
`define snowy 2'b01
`define windy 2'b10

wire [1:0] weather;

case (weather)
  `sunny : action <= go_motorcycling;
  `snowy : action <= go_skiing;
  `windy : action <= go_paragliding;
endcase

In the above case statement only 3 of the 4 possible values of "weather" are covered. This causes a latch to be inferred on the signal "action".
Note, for the above example the Synopsys full_case directive may also be used to avoid the latch inference, as explained in Chapter 3. The following example contains the default statement that provides the fourth condition, thereby preventing the latch inference.

case (weather)
  `sunny : action <= go_motorcycling;
  `snowy : action <= go_skiing;
  `windy : action <= go_paragliding;
  default : action <= go_paragliding;
endcase

VHDL does not allow incomplete case statements. This often means that the others clause must be used; consequently, the above problem does not occur in VHDL. However, latches may still be inferred in VHDL if a particular output signal is not assigned a value in each branch of the case statement. The inference being that outputs must be assigned a value in all branches to prevent latch inference in VHDL.

case (weather) is
  when sunny => action <= go_motorcycling;
  when snowy => action <= go_skiing;
  when windy => action <= go_paragliding;
  when others => null;
end case;

The above example, although containing the others clause, will infer latches because the output signal "action" is not assigned a particular value in the others clause. To prevent this, all branches should be completely specified, as follows:

case (weather) is
  when sunny => action <= go_motorcycling;
  when snowy => action <= go_skiing;
  when windy => action <= go_paragliding;
  when others => action <= go_paragliding;
end case;

5.4.2.2 Register Inference

DC provides a wide variety of templates for register inference. This is to support different edge-types of the clock and reset mechanisms. A register is inferred when there is an edge specified in the sensitivity list. The edge could be a positive edge or a negative edge.

5.4.2.2.1 Register Inference in Verilog

In Verilog, a register is inferred when an edge is specified in the sensitivity list of an always block. One register is inferred for each of the variables assigned in the always block.
All variable assignments not directly dependent on the clock-edge should be made in a separate always block, which does not have an edge specification in its sensitivity list.

A plain and simple positive edge-triggered D flip-flop is inferred using the following template:

always @(posedge clk)
  reg_out <= data;

In order to infer registers with resets, the reset signal is added to the sensitivity list, with the reset logic coded within the always block. Following is an example of a D flip-flop with an asynchronous reset:

always @(posedge clk or posedge reset)
  if (reset)
    reg_out <= 1'b0;
  else
    reg_out <= data;

Having a synchronous reset is a simple matter of removing the "reset" signal from the sensitivity list. In this case, since the block responds only to the clock edge, the reset is also only recognized at the clock edge.

always @(posedge clk)
  if (reset)
    reg_out <= 1'b0;
  else
    reg_out <= data;

A negative edge-triggered flop may be inferred by using the following template:

always @(negedge clk)
  reg_out <= data;

Absence of a negative edge-triggered flop in the technology library will result in DC inferring a positive edge-triggered flop with an additional inverter to invert the clock signal.

5.4.2.2.2 Register Inference in VHDL

In VHDL, a register is inferred when an edge is specified in the process body. The following example illustrates the VHDL template to infer a D flip-flop:

reg1: process (clk)
begin
  if (clk'event and clk = '1') then
    reg_out <= data;
  end if;
end process reg1;

DC does not infer latches for variables declared inside functions, since variables declared inside functions are reassigned each time the function is called.

Coding style templates for registers with asynchronous and synchronous resets are similar in nature to the Verilog templates, shown in the previous section.
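For illustration, an asynchronous-reset version of the VHDL register template might look as follows. This is a sketch mirroring the Verilog example above, not a template reproduced from the text:

```vhdl
-- Hypothetical VHDL template for a D flip-flop with asynchronous reset,
-- consistent with the Verilog async-reset example shown earlier.
reg1: process (clk, reset)
begin
  if (reset = '1') then
    -- asynchronous reset: acts independently of the clock edge
    reg_out <= '0';
  elsif (clk'event and clk = '1') then
    reg_out <= data;
  end if;
end process reg1;
```

As with the Verilog version, moving the reset check inside the clocked branch would instead infer a synchronous reset.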
A negative edge-triggered flop may be inferred by using the following template:

reg1: process (clk)
begin
  if (clk'event and clk = '0') then
    reg_out <= data;
  end if;
end process reg1;

Absence of a negative edge-triggered flop in the technology library will result in DC inferring a positive edge-triggered flop with an additional inverter to invert the clock signal.

5.4.3 Multiplexer Inference

Depending upon the design requirements, the HDL may be coded in different ways to infer a variety of architectures using muxes. These may comprise a single mux with all inputs having the same delay to reach the output, or a priority encoder that uses a cascaded structure of muxes to prioritize the input signals. A mixture of the above techniques is also commonly used to place the late arriving signal closer to the output.

The correct use of if and case statements is a complex topic that is outside the scope of this chapter. There are application notes (from Synopsys) and other published materials currently available that explain the proper usage of these statements. It is therefore the intent of this chapter to refer the users to outside sources for this information. Only a brief discussion is provided in this section.

5.4.3.1 Use case Statements for Muxes

In general, if statements are used for latch inferences and priority encoders, while case statements are used for implementing muxes. It is recommended to infer muxes exclusively through case statements. The if statements may be used for latch inferencing and priority encoders. They may also be effectively used to prioritize late arriving signals. This kind of prioritizing may be implementation dependent. It also limits reusability.

To prevent latch inference in case statements, the default part of the case statement should always be specified. For example, in the case of a state machine, the default action could be that all states covered by the default clause cause a jump to the "start" state.
Having a default clause in the case statement is the preferred way to write case statements, since it makes the HDL independent of the synthesis tool. Using directives like full_case etc. makes the code dependent on the synthesis tool. If the default action is to assign don't-cares, then a difference in behavior between the RTL simulation and the synthesized result may occur. This is because DC may optimize the don't-cares randomly, causing the resulting logic to differ.

5.4.3.2 if versus case Statements: A Case of Priorities

Multiple if statements with multiple branches result in the creation of a priority encoder structure.

always @(weather or go_to_work or go_to_beach)
begin
  if (weather[0])
    action = go_to_work;
  if (weather[1])
    action = go_to_beach;
end

In the above example, the signal "weather" is a two-bit input signal and is used to select the two inputs, "go_to_work" and "go_to_beach", with "action" as the output. When synthesized, the cascaded mux structure of the priority encoder is produced, as shown in Figure 5-2.

Figure 5-2. Result of using Multiple if Statements (inputs go_to_work and go_to_beach, selects weather[0] and weather[1], output action)

If the above example used a case statement (instead of multiple if statements) in which all possible values of the selection index were covered and were exclusive, then it would have resulted in a single multiplexer, as shown in Figure 5-3.

Figure 5-3. Result of using case Statement, or Single if Statement (inputs go_to_work and go_to_beach, selects weather[0] and weather[1], output action)

The same structure (Figure 5-3) is produced if a single if statement is used, along with else-if statements to cover all possible branches.

5.4.4 Three-State Inference

Tri-state logic is inferred when high impedance (Z) is assigned to an output. Arbitrary use of tri-state logic is generally not recommended because of the following reasons:

a) Tri-state logic reduces testability.
b) Tri-state logic is difficult to optimize, since it cannot be buffered.
This can lead to max_fanout violations and heavily loaded nets. On the upside, however, tri-state logic can provide significant savings in area.

Verilog

assign tri_out = enable ? tri_in : 1'bz;

VHDL

tri_out <= tri_in when (enable = '1') else 'Z';

5.5 Order Dependency

Both Verilog and VHDL provide variable assignments that are order dependent or order independent. Correct usage of these produces the desired results, while incorrect usage may cause the synthesized logic to behave differently than the source RTL.

5.5.1 Blocking versus Non-Blocking Assignments in Verilog

It is important to use non-blocking statements when doing sequential assignments, such as pipelining and modeling of several mutually exclusive data transfers. Use of blocking assignments within sequential processes may cause race conditions, because the final result depends on the order in which the assignments are evaluated. The non-blocking assignments are order independent; therefore they match closely the behavior of the hardware. A non-blocking assignment is done using the "<=" operator, whereas the "=" operator is used for blocking assignments.

always @(posedge clk)
begin
  firstReg  <= data;
  secondReg <= firstReg;
  thirdReg  <= secondReg;
end

In hardware, the register updates will occur in the reverse order as shown above. The use of non-blocking assignments causes the assignments to occur in the same manner as hardware, i.e., thirdReg will get updated with the old value of secondReg, and secondReg will get updated with the old value of firstReg. If blocking assignments were used in the above example, the signal "data" would have propagated all the way through to thirdReg concurrently during simulation.

The blocking assignments should generally be used within the combinational always block.

5.5.2 Signals versus Variables in VHDL

Similar to Verilog, VHDL also provides order dependency through the use of signals and variables. The signal assignments may be equated to Verilog's non-blocking assignments, i.e., they are order independent.
The variable assignments are order sensitive and correlate to Verilog's blocking assignments. Variable assignments are done using the ":=" operator, whereas the "<=" operator is used for signal assignments.

The following example illustrates the usage of signal assignments within a sequential process block. The resulting hardware contains three registers, with signal "data" propagating from firstReg to secondReg and then to thirdReg. The RTL simulation will also show the same result.

process (clk)
begin
  if (clk'event and clk = '1') then
    firstReg  <= data;
    secondReg <= firstReg;
    thirdReg  <= secondReg;
  end if;
end process;

A general recommendation is to only use signal assignments within sequential processes, and variable assignments within combinational processes.

5.6 Chapter Summary

This chapter highlighted the partitioning and coding styles suited for synthesis. Various guidelines and suggestions were provided to help the user code the RTL correctly with proper partitions, to make effective use of the synthesis engine.

The chapter began with suggestions on successful partitioning techniques and why they are necessary, followed by a short discussion on "what is RTL". Emphasis was given to "thinking hardware" while coding the design. Next, general guidelines were covered that encompassed various suggestions and techniques that, though not essential for synthesis, have significant impact on successful optimization. Adherence to these suggestions produces optimized designs that are well suited for automating the synthesis process.

An important section was devoted to the coding styles, and numerous examples were provided as templates to infer the correct logic. These included inference of latches, registers, multiplexers and three-state logic elements. At each step, advantages and disadvantages along with the correct usage were discussed. The last section described the order dependency feature of both the Verilog and VHDL languages.
Also discussed were appropriate coding techniques to be used by utilizing the order dependency feature of both languages.

CONSTRAINING DESIGNS

This chapter discusses the process of specifying the design environment and its constraints. It describes various commonly used DC commands, along with other helpful constraints that may be used to synthesize complex ASIC designs. Please note that the commands described in this chapter only contain the most frequently used options. Designers are advised to consult the DC reference manual for the entire list of options available to a particular command.

This chapter contains information that is useful both for the novice and the advanced users of Synopsys tools. The chapter attempts to focus on "real world" applications, by taking into account deviations from the ideal situation. In other words, "Not all designs or designers follow Synopsys recommendations". Incorporated within the chapter are numerous helpful ideas, marked as ©, to guide the reader in real time application for selected commands.

6.1 Environment and Constraints

In order to obtain optimum results from DC, designers have to methodically constrain their designs by describing the design environment, target objectives and design rules. The constraints may contain timing and/or area information, usually derived from design specifications. DC uses these constraints to perform synthesis and tries to optimize the design with the aim of meeting the target objectives.

6.1.1 Design Environment

Up till now, the assumption has been that the design has been partitioned, coded and simulated. The next step is to describe the design environment. This procedure entails defining, for the design, the process parameters, I/O port attributes, and statistical wire-load models. Figure 6-1 illustrates the essential DC commands used to describe the design environment.

- set_min_library

This is a new command, introduced in the DC98 version.
The command allows users to simultaneously specify the worst-case and the best-case libraries. This may be useful during initial compiles, preventing DC from introducing setup-time violations while fixing the hold-time violations.

set_min_library <max library filename> -min_version <min library filename>

dc_shell> set_min_library "ex25_worst.db" -min_version "ex25_best.db"

© The above command may be used for fixing hold-time violations during incremental compile or for in-place optimization. In this case, the user should set both minimum and maximum values for the operating conditions.

Figure 6-1. Basic Design Environment (set_operating_conditions on the whole design or current_design; set_drive on the clock; set_driving_cell on input signals; set_load on inputs and outputs; set_wire_load for each block, including the top level; set_max_capacitance, set_max_transition and set_max_fanout on input and output ports)

- set_operating_conditions describes the process, voltage and temperature conditions of the design. The Synopsys library contains the description of these conditions, usually described as the WORST, TYPICAL and BEST cases. The names of the operating conditions are library dependent. Users should check with their library vendor for the correct setting. By changing the value of the operating condition command, the full range of process variations is covered. The WORST case operating condition is generally used, thereby optimizing the design for maximum setup-time. The BEST case condition is commonly used to fix the hold-time violations. The TYPICAL case is mostly ignored, since analysis at the WORST and BEST cases also covers the TYPICAL case.

set_operating_conditions <name of operating conditions>
dc_shel1> set_operating_conditions WORST © With the introduction of DC98, it is now possible to optimize the design both with the WORST and the BEST case, simultaneously. The optimization is achieved by using the -min and ~max options in the above command, as illustrated below. This is very useful when optimizing the design to fix hold-time violations. dc_she11> set_operating_conditions ~min BEST -max WORST — set_wire_load command is used to provide estimated statistical wire- load information to DC, which in tum, uses the wire-load information to ‘model net delays as a function of loading. Generally, a number of wire- load models are present in the Synopsys technology library, each representing a particular size block. In addition, designers may also choose to create their own custom wire-load models to accurately model the net loading of their blocks. set_wire_load
“mode < top | enclosed | segmented > dc_shel1> set_wire_load MEDIUM -mode top There are three modes associated for modeling wire-load models. These are top, enclosed, and segmented. Generally, only the first two modes are in common use. The segmented wire-load mode is not prevalent, since it relies ‘on the wire-load models that are specific to the net segments. The mode top defines that all nets in the hierarchy will inherit the same wire- load model as the top-level block. One may choose to use this wire-load mode! for sub-blocks, if planning to flatten them later for layout. This mode may also be chosen, if the user is synthesizing the design using the bottom- up compile method, The second mode, enclosed specifies that all nets (of the sub-blocks) will inherit the wire-load model of the block that completely encloses the sub- blocks. For example, if the designer is synthesizing sub-blocks B and C that are completely enveloped by block A (which in tum is completely enclosed by the top-level), then sub-blocks B and C will inherit the wire-load models defined for block A. ‘The last mode, segmented is used for wires crossing hierarchical boundaries. In the above example, sub-blocks B and C will inherit the wire- load models specific to them, while the nets between sub-block B and C (but, within block A) will inherit the wire-load model specified for block A. © Itis extremely important that designers accurately model the wire lozds of their design. Too optimistic or too pessimistic wire-load models result in increased synthesis iterations, in an effort to achieve timing convergence after post-layout. In general, during the pre-layout phase, slightly pessimistic wire-load models are used. This is done to provide cextra timing margin that may get absorbed, by the routed design. set_drive and set_driving_celll are used at the input ports of the block. set_drive command is used to specify the drive strength at the input port. 
It is typically used to model the external drive resistance to the ports of the block or chip. The value of 0 signifies the highest drive strength and is commonly utilized for clock ports. Conversely, set_driving_cell is used to model the drive resistance of the driving cell to the input ports. This command takes the name of the driving cell as its argument and applies all design rule constraints of the driving cell to the input ports of the block.

set_drive <value> <object list>

set_driving_cell -cell <cell name> -pin <pin name> <object list>

dc_shell> set_drive 0 {CLK RST}
dc_shell> set_driving_cell -cell BUFF1 -pin Z all_inputs()

- set_load sets the capacitive load, in the units defined in the technology library (usually picofarads, or pF), on the specified nets or ports of the design. It is typically used to set capacitive loading on output ports of the blocks during pre-layout synthesis, and on nets, for back-annotating the extracted post-layout capacitive information.

set_load <value> <object list>
dc_shell> set_load 1.5 all_outputs()
dc_shell> set_load 0.3 find(net, "blockA/n1234")

- Design Rule Constraints, or DRCs, consist of the set_max_transition, set_max_fanout and set_max_capacitance commands. These rules are generally set in the technology library and are determined by the process parameters. These rules should not be violated in order to achieve working silicon. Previous releases of DC (v97.08 and before) prioritized DRCs even at the expense of poor timing. However, the latest version, DC98, prioritizes timing requirements over DRCs.

The DRC commands can be applied to input ports, output ports or to the current_design. Furthermore, if the value set in the technology library is not adequate or is too optimistic, then these commands may also be used at the command line, to control the buffering in the design.

set_max_transition <value> <object list>
set_max_capacitance <value> <object list>
set_max_fanout <value> <object list>

dc_shell> set_max_transition 0.3 current_design
dc_shell> set_max_capacitance 1.5 find(port, "out1")
dc_shell> set_max_fanout 3.0 all_outputs()

6.1.2 Design Constraints

Design constraints describe the goals for the design. They may consist of timing or area constraints. Depending on how the design is constrained, DC tries to meet the set objectives. It is imperative that designers specify realistic constraints, since unrealistic specification results in excess area, increased power and/or degradation in timing. The basic commands to constrain a design are shown in Figure 6-2.

Figure 6-2. Basic Constraints for Synthesis (create_clock and set_clock_skew on the clock; set_max_area for each block; set_max_capacitance, set_max_transition and set_max_fanout on the current design, at the top level)

- create_clock command is used to define a clock object with a particular period and waveform. The -period option defines the clock period, while the -waveform option controls the duty cycle and the starting edge of the clock. This command is applied to pin or port object types. The following example specifies that the port named CLK is of type "clock", with a period of 40 ns and a 50% duty cycle. The positive edge of the clock starts at time 0 ns, with the falling edge occurring at 20 ns. By changing the falling edge value, the duty cycle of the clock may be altered.

dc_shell> create_clock -period 40 -waveform {0 20} CLK

In some cases, a block may only contain combinational logic. To define delay constraints for this block, one can create a virtual clock and specify the input and output delays in relation to the virtual clock. To create a virtual clock, designers may replace the port name (CLK, in the above example) with the -name <clock name> option in the above command. Alternatively, one can use the set_max_delay or set_min_delay commands to constrain such blocks. This is explained in detail in the next section.

- set_dont_touch_network is a very useful command, usually used for clock networks and resets. This command is used to set a dont_touch property on a port, or on a net. Note that setting this property will also prevent DC from buffering the net in order to meet DRCs. In addition, any gate coming in contact with the "dont_touched" net will also inherit the dont_touch attribute.

dc_shell> set_dont_touch_network {CLK, RST}

Suppose you have a block that takes as input the primary clock, and generates secondary clocks, e.g., clock divider logic. In this scenario, you should apply set_dont_touch_network to the generated clock output port of the block. This will help prevent DC from buffering the clock network.

© Caution should be exercised while using the set_dont_touch_network command. For instance, consider a design that contains gated clock circuitry, with the set_dont_touch_network attribute applied to the clock input. This will prevent DC from appropriately buffering the gated logic, resulting in DRC violations for the clock signal. The same holds true for gated resets.

- set_dont_touch is used to set a dont_touch property on the current_design, cells, references or nets. This command is frequently used during hierarchical compilation of the blocks. Also, it can be used for preventing DC from inferring certain types of cells present in the technology library.

dc_shell> set_dont_touch current_design
dc_shell> set_dont_touch find(cell, "sub1")
dc_shell> set_dont_touch find(net, "gated_rst")

© For example, this command may be used on the block containing spare gates. The command will then instruct DC not to disturb (or optimize) the instantiation of the spare gates block.

- set_dont_use command is generally set in the .synopsys_dc.setup environment file.
The command is instrumental in eliminating certain types of cells from the technology library that the user does not want DC to infer. For instance, by using this command you can filter out the flip-flops in your technology library whose names start with "SD", as illustrated below.

dc_shell> set_dont_use {mylib/SD*}

set_input_delay specifies the input arrival time of a signal in relation to the clock. It is used at the input ports to specify the time it takes for the data to be stable after the clock edge. The timing specification of the design usually contains this information, as the setup/hold-time requirements for input signals. Given the top-level timing specification of the design, this information may also be extracted for the sub-blocks of the design by utilizing the top-down characterize compile method, explained in Chapter 7.

dc_shell> set_input_delay -max 23.0 -clock CLK {datain}
dc_shell> set_input_delay -min 0.0 -clock CLK {datain}

In Figure 6-3, the maximum input delay constraint of 23 ns and the minimum input delay constraint of 0 ns are specified for the signal datain with respect to the clock signal CLK, which has a 50% duty cycle and a period of 30 ns. In other words, the setup-time requirement for the input signal datain is 7 ns, while the hold-time requirement is 0 ns.

[Figure 6-3. Specification of Input Delay — 23 ns of input delay leaves 7 ns of the 30 ns period for setup]

If both the -min and -max options are omitted, the same value is used for both the maximum and minimum input delay specifications.

set_output_delay is used at the output ports to define the time it takes for the data to be available before the clock edge. The timing specification of the design usually contains this information. Given the top-level timing specification of the design, this information may also be extracted for the sub-blocks of the design by utilizing the top-down characterize compile method, explained in Chapter 7.
dc_shell> set_output_delay -max 19.0 -clock CLK {dataout}

In Figure 6-4, the output delay constraint of 19 ns is specified for the signal dataout with respect to the clock signal CLK, which has a 50% duty cycle and a period of 30 ns. This means that the data becomes valid 11 ns after the clock edge.

[Figure 6-4. Specification of Output Delay — dataout is valid 11 ns after the clock edge, leaving 19 ns of the 30 ns period for external delay]

Note: During the pre-layout phase, it is sometimes necessary to over-constrain selective signals by a small amount to maximize the setup time, thereby squeezing out extra timing margin in order to reduce synthesis-layout iterations. To achieve this, one may fool DC by specifying over-constrained values to the above commands. Remember that over-constraining designs by a large amount will result in an unnecessary increase in area and power consumption.

Note: A negative value (e.g., -0.5) may also be used to provide extra timing margin while fixing hold-time violations after layout, by making use of in-place optimization on the design, explained in Chapter 9.

The set_clock_skew command lets the user define the clock network delay and clock skew information. There are different options available for this command that facilitate the delay specification, both during the pre- and the post-layout phases, as explained in the "Clocking Issues" section.

dc_shell> set_clock_skew -delay 2.5 {CLK}

set_clock_transition for some reason does not get as much attention as it deserves. However, it is a very useful command, used during pre-layout synthesis and for timing analysis. Using this command forces DC to use the specified (fixed) transition value for the clock port or pin.

dc_shell> set_clock_transition 0.3 {CLK}

Note: Setting a fixed value for the transition time of the clock signal in pre-layout is essential because of the large fanout associated with the clock network.
Using this command enables DC to calculate realistic delays for the logic fed by the clock net, based on the specified clock-signal transition value. This is further explained in the "Clocking Issues" section later in the chapter.

6.2 Advanced Constraints

This section describes additional design constraints that go beyond the general constraints covered in the previous section. These constraints consist of specifying false paths, multicycle paths, max and min delays, etc. In addition, this section discusses the process of grouping timing-critical paths for extra optimization.

It must be noted, however, that the use of too many timing exceptions, such as false paths and multicycle paths, has a significant impact on run times.

set_false_path is used to instruct DC to ignore a particular path for timing or optimization. Identification of false paths in a design is critical. Failure to do so compels DC to optimize all paths in order to reduce total negative slack. Consequently, the critical timing paths may be adversely affected by the optimization of all paths, which also includes the false paths. The valid startpoints for this command are the input ports or the clock pins of sequential elements, and the valid endpoints are the output ports or the data pins of sequential cells. In addition, one can further target a particular path using the -through switch.

dc_shell> set_false_path -from in1 -through U1/Z -to out1

Note: Use this command when timing-critical logic is failing static timing analysis because of the false paths.

set_multicycle_path is used to inform DC of the number of clock cycles a particular path requires in order to reach its endpoint. DC automatically assumes that all paths are single-cycle paths and will unnecessarily try to optimize the multicycle segment in order to achieve the timing.
This may have a direct impact on adjacent paths as well as on area. The command also provides the -through option, which facilitates isolating the multicycle segment in a design.

dc_shell> set_multicycle_path 2 -from U1/Z \
          -through U2/A \
          -to out1

set_max_delay defines the maximum delay required, in time units, for a particular path. In general, it is used for blocks that contain combinational logic only. However, it may also be used to constrain a block that is driven by multiple clocks, each with a different frequency. This command has precedence over DC-derived timing requirements.

Note: For blocks containing only combinational logic, one may either create a virtual clock and constrain the block accordingly, or use this command to constrain the total delay from all inputs to all outputs, as shown below:

dc_shell> set_max_delay 5 -from all_inputs() -to all_outputs()

Note: Although it is preferable to define only a single clock per block, there are situations where a block may contain multiple clocks, each with a different frequency. To constrain such a block, one may define all the clocks in the block using the normal create_clock and set_dont_touch_network commands. However, it becomes tedious to assign input delays of signals related to individual clocks. To avoid this situation, an alternative approach is to define the first clock (the most stringent one) using the normal approach, while constraining other clocks through the set_max_delay command, as shown below.

dc_shell> set_max_delay 0 -from CK2 \
          -to all_registers(-clock_pins)

The value of 0 signifies that a zero delay value is desired between the input port CK2 and the clock pins of all the flops within the block. In addition, one may also need to apply set_dont_touch_network to the other clocks. This method is suitable for designs containing gated clocks or resets.
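The virtual-clock alternative mentioned above can be sketched as follows. This is a minimal illustration, not taken from the text; the clock name VCLK and the ports datain/dataout are hypothetical, and the delay values are arbitrary budget numbers:

```
/* define a virtual clock (no port attached) for a combinational block */
create_clock -name VCLK -period 40 -waveform {0 20}

/* budget the block: data arrives 23 ns after the virtual clock edge... */
set_input_delay 23.0 -clock VCLK {datain}

/* ...and must be ready 10 ns before the next capturing edge */
set_output_delay 10.0 -clock VCLK {dataout}
```

Because no port carries VCLK, DC uses it purely as a timing reference for the input/output delay budgets.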
set_min_delay is the opposite of the set_max_delay command, and is used to define the minimum delay required, in time units, for a particular path. Specifying this command in conjunction with the set_fix_hold command (described in Chapter 9) will instruct DC to add delays in the block to meet the minimum time specified. This command also has precedence over DC-derived timing requirements.

dc_shell> set_min_delay 3 -from all_inputs() -to all_outputs()

The group_path command is used to bundle together timing-critical paths into a group for cost-function calculations. Groups enable you to prioritize the grouped paths over others. Different options exist for this command, including the specification of critical range and weights.

dc_shell> group_path -to {out1 out2} -name grp1

Note: Adding too many groups has a significant impact on the compile time. Therefore, use it only as a last resort.

Note: Exercise caution while using this command. One may find that using it increases the delay of the worst violating path in the design. This is due to the fact that DC prioritizes the grouped paths over other paths in the design. In order to improve the overall cost function, DC will try to optimize the grouped path over others and may degrade the timing of another group's worst violator.

6.3 Clocking Issues

In any design, the most critical part of synthesis is the clock description. There are always issues concerning the pre- and post-layout definitions. Traditionally, big buffers were placed at the source of the clock to drive the full clock network. Thick clock spines were used in the layout for even distribution of clock network delays and to minimize clock skews. Although this method sufficed for sub-micron technologies, it fails miserably in the VDSM realm. The interconnect RCs currently account for a major part of the total delay. This is mainly due to the increase in resistance of the shrinking metal widths.
It is difficult, if not impossible, to model clock networks using the traditional approach. With the arrival of complex layout tools capable of synthesizing clock trees, the traditional method has changed dramatically. Since layout tools have the cell placement information, they are best equipped to synthesize the clock trees. It is therefore necessary to describe clocks in DC such that the description imitates the clock delays and skews of the final layout.

6.3.1 Pre-Layout

For the reasons explained above, it is best to estimate the clock tree latency and skew during the pre-layout phase. To do this, use the following commands:

dc_shell> create_clock -period 40 -waveform {0 20} CLK
dc_shell> set_clock_skew -delay 2.5 -uncertainty 0.5 CLK
dc_shell> set_clock_transition 0.2 CLK
dc_shell> set_dont_touch_network CLK
dc_shell> set_drive 0 CLK

In the above example, a delay of 2.5 ns is specified as the overall latency for the clock signal CLK. In addition, the -uncertainty option approximates the clock skew. One can specify different numbers for the setup and hold uncertainties by using -minus_uncertainty for the setup time, and -plus_uncertainty for the hold time. For example:

dc_shell> set_clock_skew -delay 2.5 \
          -minus_uncertainty 2.0 \
          -plus_uncertainty 0.2 CLK

Furthermore, specification of the clock transition is essential. This restricts the max transition value of the clock signal. The delay through a cell is affected by the slope of the signal at its input pin and the capacitive loading present at the output pin. The clock network generally feeds a large number of endpoints. This means that, although the clock latency value is fixed, the input transition time of the clock signal at the endpoint gates would otherwise be computed as slow. This results in DC calculating unrealistic delays for the endpoint gates, even though in reality the post-routed clock tree ensures fast transition times.
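Since the same five commands are repeated for every clock, the pre-layout setup can be wrapped in a loop, in the dc_shell foreach style used elsewhere in this book. This is a sketch only; the clock names CLK1/CLK2 and the shared period, latency and uncertainty values are assumptions (real designs would parameterize them per clock):

```
/* hypothetical list of top-level clock ports */
clk_list = {CLK1, CLK2}

foreach (clk, clk_list) {
    create_clock -period 40 -waveform {0 20} clk
    set_clock_skew -delay 2.5 \
        -minus_uncertainty 2.0 \
        -plus_uncertainty 0.2 clk
    set_clock_transition 0.2 clk
    set_dont_touch_network clk
    set_drive 0 clk
}
```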
6.3.2 Post-Layout

Defining clocks after layout is relatively easy, since the user does not need to worry about the clock latency and skews. These are determined by the quality of the routed clock tree. Some layout tools provide a direct interface to DC. This provides a smooth mechanism for taking the routed netlist, consisting of the clock tree, back to DC. If this interface is not present, then the user should extract the clock latency and skew information from the layout tool. Using the pre-layout approach, this information can be used to define the clock latency and clock skew, as described before. If, however, the netlist can be ported to DC, then the following commands may be used to define the clocks. For example:

dc_shell> create_clock -period 40 -waveform {0 20} CLK
dc_shell> set_clock_skew -propagated \
          -minus_uncertainty 0.5 \
          -plus_uncertainty 0.05 CLK
dc_shell> set_dont_touch_network CLK
dc_shell> set_drive 0 CLK

Notice the absence of the -delay option and the inclusion of the -propagated option. Since we now have the clock tree inserted in the netlist, the user should propagate the clock instead of fixing it to a certain value. Similarly, the set_clock_transition command is no longer required, since DC will now calculate the input transition value of the clock network based on the clock tree. In addition, a small clock uncertainty value may still be defined. This ensures a robust design that will function across a wider process variance.

Some companies do not possess their own layout tools, but rely on outside vendors to perform the layout. This situation, of course, varies from one company to the other. If the vendor provides the user with the post-routed netlist containing the clock tree, then the above method can be utilized.
In some instances, instead of providing the post-routed netlist, the vendor only supplies the SDF file containing point-to-point timing for the entire clock network (and the design). In such a case, the user only needs to define the clock for the original netlist and back-annotate the SDF to the original netlist, without propagating the clock. The clock skews and delays will then be determined from the SDF file when performing static timing analysis.

6.3.3 Generated Clocks

Many complex designs contain internally generated clocks. An example of this is clock-divider logic that may be used to generate secondary clock(s) of a different frequency, derived from the primary clock source. If the primary clock has been designated as the clock source, then a limiting factor of DC is that it does not automatically create a clock object for the generated clocks.

[Figure 6-5. Specification of Generated Clock(s) — the top level contains a clk_div block whose divided output clkA drives Block A, while the internally buffered primary clock clkB drives Block B]

Consider the logic illustrated in Figure 6-5. A clock-divider circuit in the clk_div block is used to divide the frequency of the primary clock CLK by half, and then generate the divided clock that drives Block A. The primary clock is also used to clock Block B, and is buffered internally (in the clk_div block) before feeding Block B.

Assignment of a clock object through the create_clock command on the top-level input CLK is sufficient for the clock feeding Block B. This is because the clkB net inherits the clock object (through the buffer) specified at the primary source. However, clkA is not so fortunate. DC is unable to propagate the clock object throughout the entire net, because the specification of the clock object on the primary source CLK stops at the register (shown as a shaded flop). To avoid this situation, the clock object for clkA should be specified on the output port of the clk_div block.
The following commands may be used to specify the clocks for the above example:

dc_shell> create_clock -period 40 -waveform {0 20} CLK
dc_shell> create_clock -period 80 -waveform {0 40} \
          find(port, "clk_div/clkA")

6.4 Putting it Together

Example 6.1 provides a brief overview of some of the commands described in this chapter.

Example 6.1

/* Design entry */
analyze -format verilog sub1.v
elaborate sub1
analyze -format verilog sub2.v
elaborate sub2
analyze -format verilog top_block.v
elaborate top_block
current_design top_block
uniquify
check_design

/* Setup operating conditions, wire-load, clocks, resets */
set_wire_load large_w -mode enclosed
set_operating_conditions WORST
set_max_transition 1.0 top_block
create_clock -period 40 -waveform {0 20} CLK
set_clock_skew -delay 2.0 -minus_uncertainty 2.5 \
    -plus_uncertainty 0.2 CLK
set_dont_touch_network {CLK RESET}

/* Input drives */
set_driving_cell -cell buff3 -pin Z all_inputs()
set_drive 0 {CLK RST}

/* Output loads */
set_load 0.5 all_outputs()

/* Set input & output delays */
set_input_delay 10.0 -clock CLK all_inputs()
set_input_delay -max 19.0 -clock CLK {IN1 IN2}
set_input_delay -min -2.0 -clock CLK IN3
set_output_delay 10.0 -clock CLK all_outputs()

/* Advanced constraints */
group_path -from IN4 -to OUT2 -name grp1
set_false_path -from IN5 -to sub1/dat_reg*
set_multicycle_path 2 -from sub1/addr_reg/CP \
    -to sub2/mem_reg/D

/* Compile and write the database */
compile
current_design top_block
write -hierarchy -output top_block.db
write -format verilog -hierarchy -output top_block.sv

/* Generate timing reports */
report_timing -nworst 50

6.5
Chapter Summary

This chapter described the basic and advanced constraint commands used in DC, along with numerous tips to enhance the synthesis process. Focus was also given to the real issues facing designers as they descend deep into VDSM technology. A separate section was dedicated to issues related to clocks. It described various techniques useful for specifying clocks, both pre- and post-layout, and included a topic on the specification of generated clocks, which are present in almost all designs. Finally, example DC scripts were included to guide users in performing complex and successful synthesis.

OPTIMIZING DESIGNS

Ideally, a synthesized design that meets all timing requirements and occupies the smallest area is considered fully optimized. To achieve this goal, one must understand the behavior of the synthesis process. This chapter guides the reader in successfully optimizing the design to obtain the best possible results.

7.1 Design Space Exploration

Achieving the smallest area while maximizing the speed of the design requires a fair amount of experimentation and iterative synthesis. The process of analyzing the design for speed and area in order to achieve the fastest logic with minimum area is termed design space exploration.

Various factors influence the optimization process, primarily the coding style. While coding, designers generally focus on the functionality of the design and may not consider the synthesis guidelines previously explained in Chapter 5 (this is a fact of life; we just have to live with it). At a later stage, modifications to the HDL code are performed to facilitate the synthesis process. In reality, the HDL is generally fixed and only minor modifications are done, since major changes may impact other blocks or test benches. For this reason, changing the HDL code to help synthesis is less desirable. For the sake of design space exploration, we can assume that the HDL code is frozen.
It is now the designer's responsibility to minimize the area and meet the target timing requirements through synthesis and optimization.

Starting from the 1998 version of DC (or DC98), the previous compile flow changed: timing is now prioritized over area. This is shown in Figure 7-1. Another major difference between DC98 and previous versions is that DC98 performs compilation to reduce "total negative slack" instead of "worst negative slack". This ability of DC98 produces better timing results, but has some impact on area. Also, in previous versions area minimization was handled automatically; DC98, however, requires designers to specify area constraints explicitly. Some area cleanup is generally performed by default, even without specifying area constraints, but better results are obtained by including constraints for area.

[Figure 7-1. DC98 Changes — the compilation flow prior to DC98 versus the DC98 compilation flow, in which design-rule fixing is performed with delay prioritized]

Since delay is prioritized over area, it is extremely important to provide DC with realistic constraints. Some designers, while performing a bottom-up compile, fail to realize this point and over-constrain the design. This causes DC to bloat the logic in order to meet the unrealistic timing goals. This is especially true for DC98, because it works on the reduction of total negative slack. The relationship between constraints and area is shown in Figure 7-2, which emphasizes that the area increases considerably with tightening constraints.

[Figure 7-2. Constraints versus Area]

Another representation of varying constraints is shown in Figure 7-3. It illustrates the relationship between constraints and delay across the design. The actual delay of the logic decreases with tightening constraints, while relaxed constraints produce increased delay across the design.
The horizontal part of the line on the left denotes that the constraints are so tight that further tightening will not result in a reduction of delay. Similarly, the horizontal part of the line on the right signifies fully relaxed constraints, resulting in no further increase in delay.

[Figure 7-3. Constraints versus Delay]

To further explain this concept, consider the diagram shown in Figure 7-4. For an overly constrained design, DC tries to synthesize "vertical logic" to meet the tight timing constraints. However, if the timing constraints are non-existent, the synthesized design results in "horizontal logic", violating the actual timing specifications.

[Figure 7-4. Horizontal versus Vertical Logic]

The idea here is to find a common ground by specifying realistic timing constraints. It is recommended to over-constrain the design by a small amount (maybe 10 percent tighter than required) to avoid too many synthesis-layout iterations. This produces minimum area for the design, while still meeting the timing specifications. For the same reason, choose the correct compile methodology, described in subsequent sections.

7.2 Total Negative Slack

The previous section briefly introduced the phrase "total negative slack", or TNS for short. With the advent of DC98, a lot of importance has been given to this, and designers need to understand the concept in order to perform successful logic optimization.

Prior to the DC98 version, DC would optimize the logic based on "worst negative slack", or WNS. The WNS is defined as the timing violation (or negative slack) of a signal traversing from a startpoint to the endpoint of a particular path. During compile, DC would reduce the WNS of paths one by one, in order to reduce the total violations of the block. For this reason, grouping paths and specifying the critical range for timing-critical segments was considered essential.
[Figure 7-5. Total Negative Slack]

DC98 not only prioritizes delay over area, but also targets TNS instead of WNS. To understand the concept of TNS, consider the logic diagram shown in Figure 7-5. The WNS in this case is -5 ns, from RegA to RegB. The TNS is the summation of the WNS per endpoint, and in this case equals -8 ns, i.e., the WNS to RegA plus the WNS to RegB.

There are several advantages to using this technique. Primarily, it produces fewer timing violations as compared to the previous method. Another benefit appears when using the bottom-up compile methodology: critical paths present at the sub-module level may not be seen as critical from the top level, and reducing the TNS of the overall design minimizes this effect. By providing a smaller number of violating paths to the timing-driven layout tool, fewer iterations between synthesis and layout can also be achieved.

Although reduction of TNS over WNS produces fewer timing violations, it does have an impact on the overall area. It is recommended that you set area constraints, regardless of the kind of optimization performed. By default, DC98 prioritizes TNS over area; area optimization occurs only for those paths with positive slack. In order to prioritize area over TNS, you may use the following command:

dc_shell> set_max_area 0 -ignore_tns

7.3 Compilation Strategies

Synopsys recommends the following compilation strategies, which depend entirely on how your design is structured and defined. It is up to the user's discretion to choose the most suitable compilation strategy for a design.

a) Top-down hierarchical compile method.
b) Time-budget compile method.
c) Compile-characterize-write-script-recompile (CCWSR) method.

7.3.1 Top-Down Hierarchical Compile

Prior to the release of DC98, the top-down hierarchical compile method was generally used to synthesize very small designs (less than 10K gates). Using this method, the source is compiled by reading in the entire design.
Based on the design specifications, the constraints and attributes are applied only at the top level. Although this method provides an easy push-button approach to synthesis, it was extremely memory intensive and viable only for very small designs.

The release of DC98 gives Synopsys the capability to synthesize million-gate designs by tackling much larger blocks (>100K gates) at a time. This may indeed be a feasible approach for some designs, depending on the design style (single clock, etc.) and other factors. You may use this technique to synthesize larger blocks at a time by grouping the sub-blocks together and flattening them to improve timing. The advantages and disadvantages of this methodology are summarized below.

7.3.1.1 Advantages

a) Only top-level constraints are needed.
b) Better results due to optimization across the entire design.

7.3.1.2 Disadvantages

a) Long compile times (although DC98 is much faster than previous releases).
b) Incremental changes to the sub-blocks require complete re-synthesis.
c) Does not perform well if the design contains multiple clocks or generated clocks.

7.3.2 Time-Budgeting Compile

The second compilation approach to synthesis is termed the time-budgeting strategy. This strategy is useful if the design has been partitioned properly, with timing specifications defined for each block of the design, i.e., the designers have time-budgeted the entire design, including the inter-block timing requirements.

The designer manually specifies the timing requirements for each block of the design, thereby producing multiple synthesis scripts for individual blocks. The synthesis is usually performed bottom-up, i.e., starting at the lowest level and ascending to the topmost level of the design. This method targets medium to very large designs and does not require large amounts of memory.

Consider the design illustrated in Figure 7-6. The top-level module incorporates blocks A and B.
The specifications for both of these blocks are well defined and can be directly translated to timing constraints. For designs like these, the time-budgeting compilation strategy is ideal.

[Figure 7-6. Design Suited for Time-Budgeting Compilation Strategy]

The advantages and disadvantages of this methodology are listed below.

7.3.2.1 Advantages

a) Easier to manage the design because of individual scripts.
b) Incremental changes to the sub-blocks do not require complete re-synthesis of the entire design.
c) Does not suffer from design-style limitations; e.g., multiple and generated clocks are easily managed.
d) Good quality of results in general, because of the flexibility in targeting and optimizing individual blocks.

7.3.2.2 Disadvantages

a) Tedious to update and maintain multiple scripts.
b) Critical paths seen at the top level may not be critical at the lower level.
c) The design may need to be incrementally compiled in order to fix the DRCs.

Figure 7-7 illustrates the directory structure and data organization suited for this strategy. To automate the synthesis process, a makefile is used (refer to the Appendix). The makefile specifies the dependencies of each block and employs the user-specified scripts (kept in the scripts directory) to compile the whole design, starting from the lowest level and ending at the top-most level. After the synthesis of each block, the results are automatically moved to their respective directories. The variables used in the makefile are defined in the user's .cshrc file; for example, $SYNDB may be defined as:

/home/project/design/syn/db

[Figure 7-7. Directory Structure]

7.3.3 Compile-Characterize-Write-Script-Recompile

The last compilation strategy covered here is the compile-characterize-write-script-recompile (CCWSR) method.
This is an advanced synthesis approach, useful for medium to very large designs that do not have good inter-block specifications defined. The method is not limited by hardware memory and allows for time budgeting between the blocks.

This approach requires constraints to be applied at the top level of the design, with each sub-block compiled beforehand. The sub-blocks are then characterized using the top-level constraints. This in effect propagates the required timing information from the top level to the sub-blocks. Performing a write_script on the characterized sub-blocks generates a constraint file for each sub-block. These constraint files are then used to re-compile each block of the design.

This approach normally produces the best results, since DC has been given the freedom of time budgeting between the sub-blocks of the design.

7.3.3.1 Advantages

a) Less memory intensive.
b) Good quality of results, because of optimization between the sub-blocks of the design.
c) Produces individual scripts, which may be modified by the user.

7.3.3.2 Disadvantages

a) The generated scripts are not easily readable.
b) Synthesis suffers from a ping-pong effect; in other words, it may be difficult to achieve convergence between blocks.
c) A change in a lower-level block generally requires complete re-synthesis of the entire design.

Example 7.1 illustrates a script that may be used to perform the CCWSR method.
Example 7.1 CCWSR synthesis script

/* Setup environment */
SRC = get_unix_variable("SRC")
all_modules = {A, B, top}

/* Read the source and compile each module */
foreach (module, all_modules) {
    module_source = module + ".v"
    read -format verilog SRC + "/" + module_source
    current_design module
    link
    uniquify

    /* apply module attributes and constraints */
    module_script = module + ".scr"
    include module_script

    /* compile each module */
    compile
    ungroup -flatten -all
    first_dbs = module + "_first.db"
    reset_design
    write -hierarchy -output first_dbs
}

/* Gather full design & specify top-level constraints */
foreach (module, all_modules) {
    first_dbs = module + "_first.db"
    read first_dbs
}
read -format verilog SRC + "/" + "top.v"
current_design top
link
include top.scr /** top-level constraints **/
write -hierarchy -output top_first.db

/* Characterize all instances in the design */
all_instances = {A, B, top}
characterize -constraint all_instances

/* Generate scripts for each module */
foreach (module, all_modules) {
    current_design module
    char_module_script = module + ".wscr"
    write_script > char_module_script
}

/* Recompile each module */
foreach (module, all_modules) {
    remove_design -all
    module_source = module + ".v"
    read -format verilog SRC + "/" + module_source
    current_design module
    link
    uniquify
    char_module_script = module + ".wscr"
    include char_module_script

    /* Recompile the module */
    compile
    module_db = module + ".db"
    write -hierarchy -output module_db
}

Usually, some designers prefer the time-budgeting method, while others swear by the CCWSR method. However, for most designs a mix of these strategies provides the best results. Remember that it is possible to over-constrain the design using the top-down compile or the time-budget method.
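A mix of the two strategies might be sketched as follows: a hand-written budget for a well-specified block, with DC deriving the budget for a loosely specified one. This is an illustrative sketch only; the blocks A and B, the instance name U_B, and the script files are all hypothetical, and the characterize usage mirrors Example 7.1:

```
/* time-budget the well-specified block with its own script */
read -format verilog "A.v"
current_design A
include A.scr        /* hand-written time budget */
compile

/* let DC budget the loosely specified block from the top level */
read -format verilog "B.v"
read -format verilog "top.v"
current_design top
link
include top.scr      /* top-level constraints only */
characterize -constraint {U_B}

/* save and reuse the derived constraints, then recompile B */
current_design B
write_script > B.wscr
compile
```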
7.4 Resolving Multiple Instances

Before proceeding to optimization, one needs to resolve multiple instances of the sub-blocks of the design. This is a required step, since DC does not permit compilation until the multiple instances present in the design are resolved.

To better explain the concept of multiple instantiation of a module, consider the architecture of the design shown in Figure 7-8. Presume that you have chosen the time-budgeting compilation strategy and have synthesized moduleA separately. You are now compiling moduleB, which instantiates moduleA twice, as U1 and U2. The compilation will be aborted by DC with an error message stating that moduleA is instantiated 2 times in moduleB.

There are two recommended methods of resolving this. You may either assign a dont_touch attribute to moduleA before reading moduleB, or uniquify moduleB. uniquify is a dc_shell command that in effect creates unique definitions of multiple instances. In this case, it will generate moduleA_U1 and moduleA_U2 (in Verilog), corresponding to instances U1 and U2 respectively, as illustrated in Figure 7-9.

[Figure 7-8. Non-Uniquified Design]

[Figure 7-9. Uniquified Design]

It is recommended to always uniquify the design, regardless of the compilation methodology chosen. The reason for this suggestion becomes evident when planning to perform clock tree synthesis during layout. This will be explained in detail in Chapter 9.

7.5 Optimization Techniques

This section describes various optimization techniques used to fine-tune your design. Before we start on this subject, it is important to know that DC uses cost functions to optimize the design. This topic is thoroughly covered in the DC Reference manual and therefore will not be dealt with here. I would instead like to concentrate on the practical optimization techniques, instead of the mathematical algorithms used for calculating cost functions.
Let's just say that DC calculates the cost functions based on the design constraints and DRCs to optimize the design.

7.5.1 Compiling the Design

Compilation of the design or modules is performed by the compile command. This command performs the actual mapping of the HDL code to gates from the specified target library. DC provides a range of options for this command, to fully control the mapping optimization of the design. The command syntax, along with the most commonly used options, is described below:

compile -map_effort <low | medium | high>
        -incremental_mapping
        -in_place
        -no_design_rule | -only_design_rule
        -scan

By default, compile uses -map_effort medium, which usually produces ideal results for most designs. It also uses the default settings for the structuring and flattening attributes, described in the next section. The "-map_effort high" option should only be used if the target objectives are not achieved through the default compile. This option enables DC to maximize its effort around the critical path by restructuring and re-mapping logic, in order to meet the specified constraints. Beware: this usually produces long compile times.

The -incremental_mapping option is used only after the initial compile (i.e., after the design has been mapped to gates of the technology library), as it operates only at the gate level. This is a very useful and commonly used option. It is generally used to improve the timing of the logic and to fix DRCs. During incremental compile, DC performs various mapping optimizations in order to improve timing. Although Synopsys states that the resultant design will not worsen and may only improve in terms of design constraints, on rare occasions using the above option may actually degrade the timing objectives. Users are therefore advised to experiment and use their own judgement. Nevertheless, the usefulness of this command is apparent while fixing DRCs at the top level of the design. To perform this, you may use the -only_design_rule option while compiling incrementally. This prevents DC from performing mapping optimizations and concentrates only on fixing the DRCs.

The -no_design_rule option is not used frequently and, as the name suggests, it instructs DC to refrain from fixing DRCs. You may use this option for initial passes of compile, when you don't want to waste time fixing DRC violations. At a later stage, generate the constraint report and then re-compile incrementally to fix the DRCs.
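Such a deferred DRC pass might look like the following sketch (the report file name is hypothetical):

```
/* Initial pass: skip DRC fixing to save compile time */
dc_shell> compile -no_design_rule

/* Inspect the design-rule violations */
dc_shell> report_constraint -all_violators > drc_violations.rpt

/* Second pass: fix only the DRCs, leaving the mapping untouched */
dc_shell> compile -incremental_mapping -only_design_rule
```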
This is obviously a painful approach and users are advised to use their own judgement. To achieve post-layout timing convergence, it is sometimes necessary to resize the logic to fix timing violations. The -in_place option provides the capability of resizing the gates. Various switches available to designers to control the buffering of the logic govern this option. The usage of this option is described in detail in Chapter 9.

The -scan option uses the test-ready compile feature of DC. This option instructs DC to map the design directly to scan-flops, as opposed to synthesizing to normal flops before replacing them with their scan equivalents in order to form the scan-chain. An advantage of using this feature is that, since the scan-flops normally have different timing associated with them as compared to their non-scan equivalents (or normal flops), using this technique makes DC take the scan-flop timing into account while synthesizing. This produces optimized scan-inserted logic with correct timing.

7.5.2 Flattening and Structuring

Before we begin this discussion, it must be noted that the term "flattening" used here does not imply "removing the hierarchy". Flattening is a common academic term for reducing logic to a 2-level AND/OR representation. DC uses this approach to remove all intermediate variables and parentheses (using boolean distributive laws) in order to optimize the design. This option is set to "false" by default.

Figure 7-10. Optimization Steps

Optimization of the design is performed in two phases, as shown in Figure 7-10. The logic optimization is performed initially by structuring and flattening the design. The resulting structure is then mapped to gates, using mapping optimization techniques. The default settings for the flatten and structure attributes are shown in the following table.
Table 7-1. Default Settings for Flatten and Structure Attributes

Attribute              Value
flatten                false
structure              true
structure (timing)     true
structure (boolean)    false

As shown in the above table, flattening the design (set_flatten true) and boolean optimization (set_structure -boolean true) are only performed when explicitly enabled by the user.

7.5.2.1 Flattening

Flattening is useful for unstructured designs, e.g., random logic or control logic, since it removes intermediate variables and uses boolean distributive laws to remove all parentheses. It is not suited for designs consisting of structured logic, e.g., a carry-look-ahead adder or a multiplier. Flattening results in a two-level, sum-of-products form, i.e., few logic levels between the input and output. This generally results in faster logic, since the logic levels between the inputs and outputs are minimized. Depending upon the form of the design flattened and the type of effort used, the flattened design can then be structured before the final technology mapping optimization. This is a recommended approach and should be performed to reduce the area, because flattening the design may cause a significant impact on the area of the design.

A point to remember: if you flatten the design using the "-map_effort high" option, DC may not be able to structure the design, therefore use this attribute judiciously. In general, compile the design using default settings, since most of the time they perform adequately. Designs failing timing objectives may be flattened, with structuring performed as a second phase (on by default). If the design is still failing timing goals, then turn off structuring and flatten only. You may also experiment by inverting the phase assignment, which sometimes produces remarkable results. This is done by setting the -phase option of the set_flatten command to "true". This enables DC to compare the logic produced by inverting the equation versus the non-inverted form of the equation.
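The two-level expansion that flattening performs can be sketched in a few lines of Python (a toy illustration of the distributive law, not part of any Synopsys tool; `flatten_to_sop` is a hypothetical helper name):

```python
from itertools import product

def flatten_to_sop(product_of_sums):
    """Expand a product-of-sums into a two-level sum-of-products.

    product_of_sums: list of sum-terms, e.g. [['a', 'b'], ['c', 'd']]
    represents (a + b)(c + d). The distributive law picks one literal
    from each sum-term to build every product term of the flat form.
    """
    terms = {tuple(sorted(term)) for term in product(*product_of_sums)}
    return sorted(terms)

# (a + b)(c + d) flattens to ac + ad + bc + bd
print(flatten_to_sop([['a', 'b'], ['c', 'd']]))
# -> [('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd')]
```

Note how two levels of logic (four 2-input ANDs feeding one OR) replace the original nested form — fast, but often larger, which is why structuring is usually re-applied afterwards.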
For a hierarchical design, the flatten attribute is set only on the current_design. Sub-blocks do not inherit this attribute. If you want to flatten the sub-blocks, then you have to specify them explicitly using the -design option. The syntax of the set_flatten command, along with the most commonly used options, is:

set_flatten <true | false>
            -design <list of designs>
            -effort <low | medium | high>
            -phase <true | false>
7.5.2.2 Structuring

Structuring is used for designs containing regular structured logic, e.g., a carry-look-ahead adder. It is enabled by default for timing only. When structuring, DC adds intermediate variables that can be factored out. This enables sharing of logic, which in turn results in a reduction of area. For example:

Before Structuring        After Structuring
P = x + y + c             P = I + c
Q = x + y + z             Q = I + z
                          I = x + y

It is important to note that structuring produces shared logic, which has an impact on the total delay of the logic. In the absence of specified timing constraints (or if structuring is turned off with respect to timing), the logic produced will generally result in large delays across the block boundaries. Therefore, it is recommended that realistic constraints be specified, in addition to using the default settings.

Structuring comes in two flavors: timing (the default) and boolean optimization. The latter is a useful method of reducing area, but has a greater impact on timing. Good candidates for the boolean type of optimization are non-timing-critical circuitry, e.g., random logic structures and finite state machines. As the name suggests, this algorithm uses boolean logic optimization to reduce area.

Prior to version v1997.01, DC used a different algorithm to perform boolean optimization. Synopsys has since introduced another algorithm that is more efficient and requires less run time. This algorithm is based on automatic test pattern generation (ATPG) techniques to manipulate logic networks. To enable this algorithm, you have to set the following variable to "true":

compile_new_boolean_structure = true

As with flattening, the set_structure command applies only to the current_design. The syntax of this command, along with the most commonly used options, is:

set_structure <true | false>
              -design <list of designs>
              -boolean <true | false>
              -timing <true | false>
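The factoring step that structuring performs can be mimicked in Python (a toy sketch; `extract_shared_term` is a hypothetical helper, and 'I' stands for the intermediate variable DC would create):

```python
def extract_shared_term(exprs, shared):
    """Factor a shared sum-term out of sum-of-literal expressions.

    exprs:  dict of output name -> list of OR'ed literals
    shared: literals to pull out into the intermediate variable 'I'
    Mirrors structuring: P = x+y+c, Q = x+y+z  ->  P = I+c, Q = I+z, I = x+y.
    """
    shared_set = set(shared)
    structured = {'I': sorted(shared_set)}
    for name, literals in exprs.items():
        lits = set(literals)
        if shared_set <= lits:
            # replace the shared literals with a reference to 'I'
            structured[name] = sorted(lits - shared_set) + ['I']
        else:
            structured[name] = sorted(lits)
    return structured

print(extract_shared_term({'P': ['x', 'y', 'c'], 'Q': ['x', 'y', 'z']}, ['x', 'y']))
# -> {'I': ['x', 'y'], 'P': ['c', 'I'], 'Q': ['z', 'I']}
```

The OR gate for x + y is now built once and shared, which is exactly where the area saving — and the extra delay through the shared node — comes from.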
In general, designs compiled with the default settings produce satisfactory results. However, if your design is not timing critical and you want to minimize for area only, then set the area constraints (set_max_area 0) and perform boolean optimization. For all other cases, structure with respect to timing only.

7.5.3 Removing Hierarchy

By default, DC maintains the original hierarchy of the design. The hierarchy is in effect a logical boundary, which prevents DC from optimizing across this boundary. Many designers create unnecessary hierarchy for unknown reasons. This not only makes the synthesis process more cumbersome but also results in an increased number of synthesis scripts. As mentioned before, DC optimizes within logical boundaries. Having needless hierarchy in the design limits DC to optimizing within that boundary, without optimizing across the hierarchy.

Consider the logic shown in Figure 7-11(a). The top level (block T) incorporates two blocks, A and B. The logic present at the output of block A and at the input of block B is separated by the block boundaries. Two separate optimizations of blocks A and B may not result in an optimal solution. By combining blocks A and B (i.e., removing the boundaries) as shown in Figure 7-11(b), the two logic bubbles may be optimized as one, resulting in a better solution. Designers (not Synopsys) refer to this process as "flattening" the design.

Figure 7-11. Ungrouping the Design to Improve Timing

To perform this, you may use the following commands:

dc_shell> current_design BlockT
dc_shell> ungroup -flatten -all

7.5.4 Optimizing Clock Networks

Optimizing clock networks is one of the hardest operations to perform. This is due to the fact that, as we descend towards VDSM technologies, the resistance of the metal increases dramatically, causing enormous delays from the input of the clock pin to the registers.
Also, low-power design techniques require gating the clock to minimize the switching of the transistors when data does not need to be clocked. This technique uses a gate (e.g., an AND gate), with inputs for the clock and an enable (used to enable or disable the clock source).

Previous methodologies included placement of a big buffer at the top level of the chip, near the clock source, capable of driving all the registers in the design. Thick trunks and spines (e.g., a fishbone structure) were used to fan out across the entire chip in order to reduce clock skew and minimize RC delays. Although this approach worked satisfactorily for technologies of 0.5um and above, it is definitely not suited for VDSM technologies (0.35um and less). The above approach also meant an increased number of synthesis-layout iterations.

With the advent of complex layout tools, it is now possible to synthesize the clock tree within the layout tool itself. The clock tree approach works best for VDSM technologies and, although power consumption is a concern, the clock latency and skew are both minimal compared to the big-buffer approach. Clock tree synthesis (CTS) is performed during layout, after the placement of the cells and before routing. This enables the layout tool to know the exact placement location of the registers in the floorplan. It is then easy for the layout tool to place buffers optimally, so as to minimize clock skews. Since optimizing clocks is the major cause of increased synthesis-layout iterations, performing CTS during layout reduces this cycle.

We still have to optimize the clocks during synthesis before taking the design to layout. We cannot assume that the layout tool will give us a magic clock tree that will solve all of our problems. Remember: the more optimized your initial netlist, the better the results you will get from the layout tool. So how do we optimize clock networks during synthesis?
By setting set_dont_touch_network on the clock pin, you are assured that DC will not buffer up the network in order to fix DRCs. This approach works fine for most designs that do not contain clock-gating logic. But what if the clocks are gated? If you set set_dont_touch_network on a clock that is gated, then DC will not even size up the gate (let's assume a 2-input AND gate). This is because the set_dont_touch_network attribute propagates through all the combinational logic (the AND gate, in this case) until it hits an endpoint (the input clock pin of the register, in this case). This causes the combinational logic to inherit the dont_touch attribute as well, which results in un-optimized gating logic that may violate DRCs, and hence overall timing.

For instance, suppose the clock output from the AND gate is fanning out to a large number of registers and DC inferred a minimum drive strength for the AND gate. This will cause slow input transition times to be fed to the registers, resulting in horrendous delays for the clock net. To avoid this, you may remove the set_dont_touch_network attribute and perform an incremental compilation. This will size up the AND gate and also insert additional buffers from the output of the AND gate to the endpoints.

Although this approach seems ideal, it does suffer from some shortcomings. Firstly, it takes a long time for an incremental compile to complete, and on rare occasions it may produce sub-optimal results. Secondly, a lot of foresight is needed; e.g., you need to apply the set_dont_touch_network attribute on all other nets (resets and scan-related signals) that may not require a clock tree.

A second approach is to find all high-fanout nets in your design using the report_net command and buffer them from point to point using the balance_buffer command. (Refer to the DC Reference manual for the actual syntax of this command.)
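The first approach described above might be scripted roughly as follows (net names are placeholders, and the attribute-removal step assumes dc_shell's generic remove_attribute command can remove the dont_touch_network attribute — verify against your DC release):

```
/* Ungated clock: protect the whole clock network from DRC buffering */
dc_shell> set_dont_touch_network {clk}

/* Gated clock: release the network, protect the non-clock nets, */
/* then let an incremental compile resize/buffer the gating logic */
dc_shell> remove_attribute {clk} dont_touch_network
dc_shell> set_dont_touch_network {reset_n scan_enable}
dc_shell> compile -incremental_mapping
```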
Since the balance_buffer command does not take clock skew into account, it should not be used as an alternative to clock tree synthesis. Another technique is to perform in-place optimization (IPO), using "compile -in_place", with the compile_ok_to_buffer_during_inplace_opt switch set to "false". This prevents DC from inserting additional buffers and will only size up the AND gate.

It must be noted that the above-mentioned techniques are totally design dependent. Various methods have been provided that may be used for clock network optimization. Sometimes you may find that you have to perform all of the above methods to get optimal results, and other times a single approach works perfectly. Regardless of which method you use, you should also consider what you want to do during layout. For designs without gated clocks, it is preferable that CTS be performed at the layout level. For other designs, with gated clocks, you have to analyze the clocks in the design (pre- and post-synthesis) carefully and take appropriate action. This may also include inserting the clock tree (during layout) after the AND gate for each bank of registers. Most layout tool vendors have realized this problem and offer various techniques to perform clock tree synthesis for gated clocks.

7.5.5 Optimizing for Area

By default, DC tries to optimize the design for timing. Designs that are not timing critical but are area intensive should be optimized for area. This can be done by initially compiling the design with a specification of area requirements, but no timing constraints. In other words, the design is synthesized with area requirements only; no timing constraints are used. In addition, one may choose to eliminate the high-drive strength gates by assigning the dont_use attribute to them.
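A sketch of this two-pass area flow in dc_shell (the cell and script names are hypothetical placeholders, not from the text):

```
/* Pass 1: area only -- no timing constraints */
dc_shell> set_dont_use {mylib/AND2X8 mylib/BUFX12}  /* hypothetical high-drive cells */
dc_shell> set_max_area 0
dc_shell> compile

/* Pass 2: re-apply the normal timing and area constraints, */
/* then recompile incrementally to preserve the structure   */
dc_shell> include constraints.scr
dc_shell> compile -incremental_mapping
```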
The reason for eliminating high-drive strength gates is that they are normally used to speed up the logic in order to meet timing; however, they are larger in size. By eliminating their usage, a considerable reduction in area may be achieved. Once the design is mapped to gates, the timing and area constraints should again be specified (normal synthesis) and the design re-compiled incrementally. The incremental compile ensures that DC maintains the previous structure and does not bloat the logic unnecessarily.

7.6 Chapter Summary

Optimizing the design is the most time consuming and difficult task, since it depends enormously on various factors, e.g., HDL coding styles, type of logic, constraints, etc. This chapter described advanced optimization techniques and how they affect the synthesis process. A detailed description of the impact on timing and area of varying design constraints was provided. To reiterate, the best results are achieved by providing DC with realistic constraints. Over-constraining a design results in large area and sub-optimal results.

With the introduction of DC98, the optimization flow has changed, with more emphasis given to timing rather than area. Although by default area cleanup is always performed at the end of compilation, it is nevertheless recommended that area constraints be specified. Timing optimization is performed by DC98 by minimizing the TNS. Prior to DC98, timing optimization was performed by reducing the WNS per endpoint. The TNS optimization provides far superior results, although it does have an impact on the overall area of the design.

Various compile strategies were illustrated in this chapter, along with examples to automate this process. To successfully optimize a design, you may choose a single methodology or mix these strategies to get the desired result. All the strategies have their own advantages and disadvantages, which have also been illustrated. Choose the one which best suits your design.
A separate section was devoted to uniquifying the design. Although this step may not be needed, as argued by some designers, it is recommended to always uniquify the design for the reasons outlined in Chapter 9. Finally, other optimization steps were discussed that emphasize how to produce optimal synthesized netlists. Various techniques, including clock network optimization and optimizing designs for area, were described along with recommended approaches.

DESIGN FOR TEST

Design-for-Test or DFT techniques are increasingly gaining momentum among ASIC designers. These techniques provide measures to comprehensively test the manufactured device for quality and coverage. Traditionally, testability was considered as an afterthought, with implementation done only at the end of the design cycle. This approach usually provided minimal coverage and often led to unforeseen problems that resulted in increased cycle time. Merging testability features early in the design cycle was the final solution, creating the name Design-for-Test.

8.1 Types of DFT

Various vendors, including Synopsys, provide solutions for incorporating testability in the design. Synopsys adds DFT capabilities to DC through its Test Compiler (TC), which is incorporated within the DC suite of tools. The main DFT techniques that are currently in use today are:

a) Scan insertion
b) Memory BIST insertion
c) Boundary-scan insertion

Of all three, scan insertion is the most complex and challenging technique, since it involves various design issues that need to be resolved in order to get full coverage of the design.

8.1.1 Memory BIST

Unfortunately, Synopsys does not provide any solution for automatic memory BIST (Built-In Self-Test) generation. For this reason, memory BIST insertion is not covered in this section.
However, there are vendors that offer a complete solution; therefore a brief overview describing the main function of memory BIST is included, providing designers insight into this useful technique.

The memory BIST is comprised of controller logic that uses various algorithms to generate input patterns that exercise the memory elements of a design (say, a RAM). The BIST logic is automatically generated, based upon the size and configuration of the memory element. It is generally in the form of synthesizable Verilog or VHDL, which is inserted in the RTL source with hookups leading to the memory elements. Upon triggering, the BIST logic generates input patterns, based upon a pre-defined algorithm, to fully examine the memory elements. The output result is fed back to the BIST logic, where a comparator is used to compare what went in against what was read out. The output of the comparator generates a pass/fail signal that signifies the authenticity of the memory elements.

8.1.2 Boundary Scan DFT

JTAG or boundary scan is primarily used for testing the board connections, without unplugging the chip from the board. The JTAG controller and surrounding logic may also be generated directly by DC. Boundary-scan insertion is trivial, since the whole process is rather simple and mostly automatic. It is therefore the intent of this chapter to concentrate solely on scan insertion techniques and issues. Readers are advised to refer to the Design Compiler Reference Manual for the boundary-scan insertion technique.

8.2 Scan Insertion

The scan insertion technique involves replacing all the flip-flops in the design with special flops that contain built-in logic solely for testability. The most prevalently used architecture is the multiplexed flip-flop. This type of architecture incorporates a 2-input mux at the input of the D-type flip-flop.
The select line of the mux determines the mode of the device, i.e., it enables the mux to be either in the normal operational mode (functional mode, with normal data going in) or in the test mode (with scanned data going in). These scan-flops are linked together (using the scan-data input of the mux) to form a scan-chain that functions like a serial shift register. During scan mode, a combination of patterns is applied to the primary inputs and shifted out through the scan-chain. If done correctly, this technique provides very high coverage for all the combinational and sequential logic within a chip. Other architectures available, besides the multiplexed flip-flop, are the LSSD structure, the clocked scan structure, etc. As mentioned above, the most commonly used architecture is the multiplexed flip-flop. For this reason, throughout this section the focus remains on the multiplexed flip-flop type architecture for DFT scan insertion.

8.2.1 Making the Design Scannable

Synopsys provides designers the capability to perform scan insertion automatically, through its test-ready (or one-pass) compile feature. This technique allows designers to synthesize the design and map the logic directly to scan-flops, thus alleviating the potential need for post-insertion adjustments. Companies that do not use Synopsys tools for scan insertion instead rely on other means to perform the same task. In such a case, replacing the normal flops in the synthesized netlist with their scan-equivalent flops, before linking the scan chains together, performs the scan insertion. It is strongly recommended that static timing analysis be performed again on the scan-inserted netlist, since some difference may exist between the characterized timing of the scan-flops and their equivalent non-scan (normal) flops. This difference, if not corrected, may adversely affect the total slack of the design.
To avoid this problem, library developers usually specify the same timing for the scan-flops as for the normal flops.

To enable the Synopsys test-ready compile feature, the scan style should be set prior to compilation. The set_scan_configuration command is used to instruct DC on a particular scan style. There are various options available for this command:

dc_shell> set_scan_configuration -style multiplexed_flip_flop
dc_shell> compile -scan
dc_shell> preview_scan

The "compile -scan" command compiles the design directly to scan-flops, i.e., the scan insertion is not performed; the design is simply mapped to the scan-flops directly. The design at this point is functionally correct. The preview_scan command is used to preview the scan architectures specified through the set_scan_configuration command.

It is recommended that the check_test command be used after compilation to check the design for testability-related rule violations, which are reported as warnings/errors. Failure to fix these violations results in reduced test coverage. The violations may relate to various issues encountered during scan insertion. Some of these issues and their solutions are discussed in the next section.

dc_shell> check_test

It is the designer's responsibility to correct the violations. This is mainly achieved by adding extra logic around the "problem" areas, or by making modifications to the test logic. In order to fix these problems, modifying the source RTL instead of the netlist is the recommended approach. This approach allows the source RTL to remain as the "golden" database that may be used for reference at some later stage. On the other hand, if the netlist is modified, then the changes may be forgotten, and thus lost, after the design is taped out.

Although the scan has not yet been inserted, an additional step at this point is to get an estimate of the fault coverage of the design by generating the statistical ATPG test patterns.
This step helps quantify the quality of the design at an earlier stage. If the coverage number is low, then the only option is to identify and fix the areas that need further improvement. However, if the fault coverage number is high, then this is an indication to proceed ahead. It must be noted that the fault coverage numbers should be considered as best case only, due to the fact that the design may be part of a larger hierarchy, i.e., it may be a sub-block. At the sub-block level, the input port controllability and output port observability may be different from when this sub-block is embedded in the full design (top level). For the full design, this may cause a lower fault coverage number than expected. The following command is used to generate the statistical test patterns:

dc_shell> create_test_patterns -sample
Once the problem areas have been identified and fixed in the RTL, the design is ready for scan insertion. Using the following command performs the scan insertion:

dc_shell> insert_scan

The insert_scan command does more than just link the scan-flops together to form a scan-chain. It also disables the tri-states, builds and orders the scan-chains, and optimizes them to remove any DRCs. This command may insert additional test logic in order to get better control over certain parts of the design.

After scan insertion, the design should once again be checked for any rule violations through the check_test command. The report_test command may also be utilized to generate all test-related information about the design. Various options exist to control the output of the report. More details can be found in the Design Compiler Reference Manual.

8.2.2 Test Pattern Generation

Upon completion of scan insertion in the design, the test patterns may be generated for the entire design using the following command:

dc_shell> create_test_patterns

During dynamic simulation, the test patterns are used as input stimuli to the design to exercise all the scan paths. This step should be performed at the full chip level, and preferably after layout.

8.3 DFT Guidelines

Obtaining high fault coverage for a design depends on the quality of the implemented DFT logic. Not all designs are ideal. Most "real-world" designs suffer from a variety of DFT-related issues which, if left unsolved, result in reduced fault coverage. This section identifies some of these issues and provides solutions to overcome them.

8.3.1 Tri-State Bus Contention

This is one of the common problems faced by the DFT tool. During scan shifts, multiple drivers on a bus may drive the bus simultaneously, thus causing contention. Fixing this problem requires that only one driver be active at a given time. This can be achieved by adding decoder logic in the design, which controls the enable input of each tri-state driver through a mux.
The mux is used to select between the normal signal (in the functional mode) and the control line from the decoder. The decoder control is selected only during scan-mode. The decoder inputs are generally controlled directly from the primary inputs, thus providing a means to selectively turn on the tri-state drivers, thereby avoiding contention.

8.3.2 Latches

Avoid using latches as much as possible. Although latches cover less area than flops, they are difficult to test. Testing may be difficult, but it is not entirely impossible. Making latches transparent during scan-mode can make them testable. This usually means adding control logic (for the clock) to each latch. If an independent clock clocks all the latches, then a single test-logic block may be used to control the clock to make the latches transparent during scan-mode.

8.3.3 Gated Reset or Preset

DFT requires that the reset/preset of a flop be controllable. If the reset/preset to a flop is functionally gated in the design, then the flop is un-scannable. To avoid this situation, the reset/preset signal should bypass the gating logic in scan-mode. A mux is generally used to remedy this problem, with the external scan-mode signal functioning as its select line, and the bypass reset/preset signal along with the original gated signal as its inputs.

8.3.4 Gated or Generated Clocks

Gated clocks suffer from the same issue that has been described above for gated resets. DFT requires that the clock input of the flop be controllable. The solution again is to bypass the gating logic through a mux, to make the flop controllable.

This problem is prevalent in those designs that contain logic to generate divided clocks. The flop(s) used to generate the divided clock should be bypassed during scan-mode. The dividing logic in this case may become un-scannable, but the divided clock can be controlled externally, thus
Small loss of coverage for the dividing logic is offset by the coverage gains achieved for the entire design. D _/) secondary clock clk > scan_mode > Figure 8-1. Bypassing Generated Clock In Figure 8-1, the secondary clock is controlled externally, by using a mux that bypasses the CLK signal in the scan-mode. This provides controllability of the secondary clock, for the rest of the design. Depending upon the type of dividing logic being used, some parts of the logic may be un-scannable. The following command may be used to inform TC to exclude a list of sequential cells while inserting scan: dc_she11> set_scan_element false
8.3.5 Use a Single Edge of the Clock

Most designs are coded using a single edge of the clock as reference. However, there may be situations where both the rising and falling edges of the clock are used. Although this may have no impact on the timing of the functional data, it may create unnecessary timing problems for the scan data. The problem may be avoided by using the same clock edge for the entire design when the design is in scan-mode. This is illustrated in the following VHDL example:

process (clk, test_mode)
begin
    if (test_mode = '1') then
        muxed_clk_output <= clk;
    else
        muxed_clk_output <= not(clk);
    end if;
end process;

The above VHDL code infers a two-input mux. The positive edge of the clock is used during scan-mode, while the falling edge of the clock is used during normal mode.

8.3.6 Multiple Clock Domains

It is strongly recommended that designers assign separate scan-chains for each clock domain. Intermixing of clock domains within a scan-chain typically leads to timing problems. This is attributed to the differences in clock skew between different clock domains. A disadvantage of using this technique is that it may lead to varying lengths of scan-chains. An alternative solution is to group all flops belonging to a common clock domain and connect them serially to form a single scan-chain. This requires the clock skew between the clock domains to be minimal. The clock sources should also be accessible from outside (primary inputs), so that the timing can be externally controlled when the device is tested at the tester. There are other solutions available to this problem.
One such solution is to use clock muxing at the clock source, so that only one clock is used during the scan-mode.

8.3.7 Order Scan-Chains to Minimize Clock Skew

Presence of clock skew within a scan-chain usually causes hold-time violations. Some designers think that since testing is performed at a slower speed as compared to the normal operational speed, the scan-chains cannot have any timing problems. This is a misconception. Only the setup-time is frequency dependent, while the hold-time is frequency independent. Therefore, it is extremely important to minimize clock skews to avoid any hold-time violations in the scan-chain.

The scan-chain may be re-ordered with flops having greater clock latency nearer to the source of the scan-chain, while the flops with less clock latency are kept farther away. This helps in reducing the clock skew, thereby minimizing the possibility of any hold-time violations.

8.3.8 Logic Un-Scannable due to Memory Element

As explained earlier, the memory itself can be tested by the use of memory BIST circuitry. However, memory elements (e.g., RAMs) that do not have scan-chains (usually built-in) surrounding them cause a loss of coverage for the combinational logic present at their inputs and outputs. Figure 8-2 illustrates a case where a memory element is being fed by combinational logic. The logic present at its inputs is shadowed by the RAM, and is thus un-testable. If the inputs to the memory element are not coming directly from sequential elements, then any combinational logic present between the sequential logic and the memory element becomes un-testable. To avoid this situation, one may bypass the RAM in the scan-mode. This is achieved by short-circuiting all the inputs feeding the RAM to the outputs of the RAM, through a mux. In the scan-mode, the mux enables the short-circuited path and allows data to bypass the RAM. Another problem that typically arises during the scan-mode is that the outputs of the memory element are unknown.
This typically results in the 'unknowns' being introduced to the surrounding scan-chain, causing it to fail. This situation can be avoided by using the bypass method, described above. The 'unknowns' generated by the RAM are blocked by the mux present at its outputs. This is because the mux is selected to bypass the RAM; it will therefore prevent the propagation of 'unknowns'.

Figure 8-2. Memory Bypass

8.4 Chapter Summary

DFT techniques are essential to an efficient and successful testing of the manufactured device. By implementing DFT features early in the design cycle, full test coverage of the design may be achieved, thereby reducing the debugging time normally spent at the tester after the device is fabricated.

This chapter described the basic testability techniques that are currently in use, including a brief description of memory BIST that is not yet supported by Synopsys. A detailed description was provided for the scan insertion DFT technique, using the Test Compiler. Various guidelines and solutions were also provided that may help the user identify the issues and problems related to this technique.

CHAPTER 9: LINKS TO LAYOUT AND POST LAYOUT OPTIMIZATION (Including Clock Tree Insertion)

Until now, a virtual wall existed between the front-end and the back-end processes, with the signoff to the ASIC vendor for fabrication occurring at the structural netlist level. The ASIC vendor was responsible for floorplanning and routing of the design, and provided the front-end designers the resulting delay data. However, this process was inefficient and often resulted in multiple exchanges of the netlist and the layout data between the designers and the ASIC vendor.
As we move deeper into the VDSM realm, the virtual wall between the front-end and the back-end is destined to collapse. This is because of the tremendous challenges and difficulties posed by the VDSM technologies. In order to overcome these difficulties, it is becoming evident that greater controllability and flexibility of the ASIC design flow is necessary. This requires total integration between the synthesis and layout processes. This means that designers are now compelled to perform their own layout. Instead of providing the ASIC vendors with the structural netlist, they now provide the physical database for final fabrication.

This shift in the signoff has resulted in a well-defined interface between the synthesis tools and the Place & Route tools (referred to as layout tools, from here onwards). Synopsys refers to this interface as links to layout, or LTL. This chapter describes the interface between DC and the layout tool. Almost all designs require the LTL interface to conduct the post-layout optimizations (PLO). This chapter also provides different strategies used for PLO. Furthermore, for successful layout, a section is devoted to clock tree synthesis, as performed by the layout tool.

Assume that the user has synthesized and optimized a design, and that the design meets all timing and area requirements. Now the question arises, "How close are the estimated wire-load models used for pre-layout optimization, to the actual extracted data from the layout?" The only way to find this information is to floorplan and then route the design. With shrinking geometries, the resistance of the wires is increasing as compared to their capacitance. This results in a large portion of the total delay (cell delay + interconnect delay) being dominated by the delays associated with the interconnect wires. In order to reduce this effect, designers are forced to spend an increased amount of time floorplanning the chip.
Therefore, it is imperative for DC to make use of the physical information, in order to perform further optimizations. Using LTL, one can exchange relevant data (e.g., timing constraints and/or placement information) between DC and the layout tool. This helps DC perform improved post-layout optimizations. It also results in reduced iterations between synthesis and layout.

9.1 Generating Netlist for Layout

Most layout tools accept only the Verilog or EDIF netlist format as inputs. Many users who code the design in VHDL generate the netlist from DC in EDIF format for layout. Although this format is universal, it does have certain drawbacks. Primarily, the EDIF format is not easily readable; therefore modifying the netlist at a later stage to perform an ECO is cumbersome. Secondly, the netlist in EDIF format is not simulatable. The question is, why should designers route a netlist that cannot be simulated? What if DC generates an incorrect netlist (bad logic) due to some bug in its EDIF translator? With EDIF, the problem will only be identified at a much later stage, while performing LVS.

Therefore, it is recommended that designers generate the netlist from DC in Verilog format, as input to the layout tool. Furthermore, the Verilog format is easy to understand, which considerably simplifies the task of modifying the netlist, in case an ECO needs to be performed on the design. In addition, even if the testbench for a design is in the VHDL format, one can still simulate the Verilog netlist by using simulators (currently available) that are capable of simulating a mixture of these languages.

Before writing out the netlist for layout, it is recommended that the following processes be performed, in order to facilitate a smooth transfer of the design from DC to the layout tool:

a) Uniquify the netlist.
b) Simplify the netlist by changing names of nets in the design.
c) Remove unconnected ports from the entire design.
d) Make sure that all pin names of leaf cells are visible.
e) Check for assign and tran statements.
f) Check for unintentional gating of clocks or resets.
g) Check for unresolved references.

9.1.1 Uniquify

As mentioned previously, the netlist must be uniquified in DC, in order to perform clock tree synthesis during layout. This operation generates a unique module/entity definition for a sub-block that is instantiated multiple times in the design. This may seem like an unnecessary operation that reduces the readability of the netlist, and results in an increased size of the netlist. However, physically the design is considered flat by most layout tools. In other words, the blocks referenced multiple times, although identical in all respects, exist physically at separate locations. Furthermore, flops present inside these blocks also need to be connected to the clock source. This makes it obvious that separate clock-net names are required for connecting the clock tree to these blocks.

Non-uniquified netlists pose a problem during clock tree transfer from the layout tool to DC. The problem only occurs if the clock tree information alone is transferred to DC (through methods described later), i.e., without a complete netlist transfer from the layout tool to DC. In this case, only one module/entity definition for the multiply instantiated blocks is present in the netlist, for a non-uniquified design. This causes a problem when the clock tree information is transferred back to DC, i.e., when modifying the design database in DC to include the buffers and additional ports in the sub-blocks. The problem is that two distinct net names (outputs of the clock tree) cannot connect to the same port of a single module/entity. Uniquifying the design solves the above problem. However, it also causes the netlist to increase in size, since it creates a separate module/entity definition for each instantiation of the block. Some users prefer to uniquify the netlist as they traverse the hierarchy to reach the top-level, while others uniquify the whole chip at once, from the top-level.
The recommended approach is to remove the dont_touch attribute from all sub-blocks of the design, before uniquifying the netlist. The following commands may be used to remove the dont_touch attribute from the entire design, before uniquifying the netlist from the top level:

dc_shell> remove_attribute find(-hierarchy design, "*") dont_touch
dc_shell> uniquify

9.1.2 Tailoring the Netlist for Layout

Some layout tools may have difficulty reading a Verilog netlist that contains unusual net names. For instance, DC sometimes produces signal names with "*cell*" or "*-return" appended or in-between the names. At times, users may find that some net names (or port names) have leading or trailing underscores. Also, most layout tools limit the maximum number of characters for a net or a port name. Depending on the restrictions imposed by the specific layout tool, it is possible for the user to clean the netlist within DC, before writing it out. This ability of DC provides a smooth interface to the layout tool while meeting all tool requirements.

To prevent DC from generating the undesirable signal names, the user must first define rules, and then instruct DC to conform to these rules while writing out the netlist. For instance, one may define rules called BORG by including the following in the ".synopsys_dc.setup" file:

define_name_rules BORG -allowed "A-Za-z0-9_" \
    -first_restricted "_" -last_restricted "_" \
    -max_length 30 \
    -map { {"*cell*", "mycell"}, {"*-return", "myreturn"} }

Instructing DC to conform to the above rules (BORG) is performed at the command line (or through a script), by using the following command:

dc_shell> change_names -hierarchy -rules BORG

In addition to the above, users may also desire to alter the bus naming style in the netlist. DC provides a variable through which the user is able to tailor the naming style of the busses written out in the netlist.
The variable may again be set in the setup file as follows:

bus_naming_style = "%s[%d]"

9.1.3 Remove Unconnected Ports

Many designs suffer from the problem of ports of a block that are left unconnected, either intentionally or maybe due to legacy reasons. Although this practice has no effect on DC in terms of producing a functionally correct netlist, some designers prefer to remove these ports during synthesis. This is generally a good practice since, if left uncorrected, DC will issue a warning message regarding the unconnected ports. Because a design may contain many such unconnected ports, it is possible that a real warning may get lost between the numerous unconnected port warnings. It is therefore preferable to remove the unconnected ports and check the design, before generating the netlist. The following commands perform this:

dc_shell> remove_unconnected_ports find(-hierarchy cell, "*")
dc_shell> check_design

9.1.4 Visible Port Names

Generally, all synthesized designs result in mapped components that have one (or more) of their output ports not connected to a net. When DC generates a Verilog netlist, it does not write out the unconnected port names. Depending upon the layout tool, a mismatch might occur between the number of ports in the physical cell versus the number of ports of the same cell present in the netlist. For example, a D flip-flop containing 4 ports, namely D, CLK, Q and QN, may be connected as follows:

DFF dff_reg (.D(data), .CLK(m_clock), .Q(data_out) );

In the above case, DC does not write out the QN port, since the function of the inverting QN output is not utilized in the design. Physically, this cell contains all 4 ports; therefore, when the netlist is read in the layout tool, a mismatch between the number of ports occurs. Setting the value of the following variable to true in the setup file can prevent this mismatch:

verilogout_show_unconnected_pins = true

Making the port names visible is solely dependent on the layout tool's requirements.
However, recently some layout tool vendors, upon realizing this limitation, have improved their tools so that the above restriction is not imposed.

9.1.5 Verilog Specific Statements

Some layout tools have difficulty reading a netlist that contains tri wires, tran primitives and assign statements. These are Verilog specific primitives and statements that are generated in the netlist for many possible reasons.

DC generates tri wires for designs containing "inout" type ports. For designs containing these types of ports, DC needs to assign values to the bi-directional ports, thus producing the tri wire statements and tran primitives. To prevent DC from generating these, users may use the following variable in the setup file. When set to true, all tri-state nets are declared as wire instead of tri.

verilogout_no_tri = true

Several factors influence the generation of the assign statements. Feedthroughs in the design are considered as one such factor. The feedthroughs may occur if the block contains an input port that is directly connected to the output port of the same block. This results in DC generating an assign statement in the Verilog netlist. The assign statements also get generated if an output port is connected to ground, or is being driven by a constant (e.g., 1'b0 or 1'b1). While writing out the netlist in Verilog format, DC issues a warning, stating that the assign statements are being written out.

In case of the feedthroughs, the user can prevent DC from generating these statements by inserting a buffer between the previously connected input and output ports. This isolates the input port from the output port, thereby breaking the feedthrough. To perform this, the following command can be used before compiling the design:

dc_shell> set_fix_multiple_port_nets -feedthroughs

The -buffer_constants option may also be used with the above command, in order to buffer the constants driving the output ports.
However, since there are many other variations that may produce the assign statements, it may be safer to use the following for full coverage:

dc_shell> set_fix_multiple_port_nets -all -buffer_constants

Many designers complain that assign statements get generated in the netlist, even after all the steps described above have been performed. In almost all cases this is caused by the dont_touch attribute present on a net without the users' knowledge. The user can find the presence of this attribute by performing a report_net command. The dont_touch attribute can be removed from the net by using the following command:

dc_shell> remove_attribute find(net,
<net name>) dont_touch

9.1.6 Unintentional Clock or Reset Gating

It is always a good idea to check and double-check the clocks in the design before handing the netlist over for place and route. Remember that the clock provides the reference for all signals, i.e., all signals are directly related to the clock and are optimized with respect to it. If the clock is unintentionally buffered (maybe you forgot to apply a set_dont_touch_network attribute on it), it will affect the clock latency and skew, which may result in the user not being able to meet the set timing objectives. Generally, resets are not considered as important as clocks. However, since the set_dont_touch_network attribute is also applied to them, it is wise to check their buffering. To check for unintentional gating of the clocks, the user may use the following command:

dc_shell> report_transitive_fanout -clock_tree

To check for unintentional gating of another signal (say, the reset signal), you may use the -from option in the above command. For example:

dc_shell> report_transitive_fanout -from reset

Obviously, the clocks should be defined before the -clock_tree option can be used. Alternatively, one may also use the -from option for the clocks. This does not require the clocks to be defined first. Note that the -from and the -clock_tree options cannot be used simultaneously.

9.1.7 Unresolved References

Designers should exercise caution and always check for any unresolved references. DC issues a warning for a design containing instantiations of a block that does not have a corresponding definition. For example, block A is the top-level module that instantiates sub-block B. If you fail to read the definition of block B in DC while writing out the netlist for block A, DC will generate a warning stating that block A contains unresolved references. Also, this message is issued for cases where a port mismatch occurs between the instanced cell and its definition.
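The checks and fixes of Sections 9.1.1 through 9.1.7 are commonly gathered into a single pre-layout script. The sketch below uses the dcsh-style syntax of this book; the BORG rule name and the output file name are illustrative, and the exact ordering may be adapted to the flow:

```
/* Pre-layout netlist clean-up (sketch) */
verilogout_show_unconnected_pins = true               /* 9.1.4 */
verilogout_no_tri = true                              /* 9.1.5 */
bus_naming_style = "%s[%d]"                           /* 9.1.2 */

remove_attribute find(-hierarchy design, "*") dont_touch
uniquify                                              /* 9.1.1 */
change_names -hierarchy -rules BORG                   /* 9.1.2 */
remove_unconnected_ports find(-hierarchy cell, "*")   /* 9.1.3 */
report_transitive_fanout -clock_tree                  /* 9.1.6 */
check_design                                          /* 9.1.7 */
write -format verilog -hierarchy -output "chip_for_layout.v"
```

Running check_design last also flags any unresolved references before the Verilog netlist is written out.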
9.2 Layout

With a clean and optimized netlist, the user is ready to transfer the design to its physical form, using the layout tool. Although layout is a complex process, it can be condensed to three basic steps, as follows:

a) Floorplanning
b) Clock tree insertion
c) Routing the database

9.2.1 Floorplanning

This is considered to be the most critical step within the entire layout process. Primarily, a design is floorplanned in order to achieve the minimum possible area, while still meeting timing requirements. Also, floorplanning is performed to divide the design into manageable blocks. In a broad sense, floorplanning consists of placement of cells and macros (e.g., RAMs and ROMs or sub-blocks) in their proper locations. The objective is to reduce net RC delays and routing capacitances, thereby producing faster designs. Placing cells and macros in proper locations also helps produce minimum area and decrease routing congestion.

Almost all designs undergo the floorplanning phase, and time should be spent trying to find the correct placement location of the cells. Optimal placement improves the overall quality of the design. It also helps in reducing synthesis-layout iterations. For small and/or slow designs the floorplanning may not be as important as for large and/or timing critical designs consisting of thousands of gates (>150K). For these designs, it is recommended that a hierarchical placement and routing of the design be performed. For example, a sub-block is placed and routed, meeting all timing and area requirements. The sub-block is subsequently brought in as a fixed macro inside the full design, to be routed with the rest of the cells or macros.

9.2.1.1 Timing Driven Placement

Finding correct locations of cells and macros is time consuming, since each pass requires full timing analysis and verification. If the design fails timing requirements, it is re-floorplanned. This obviously is a time consuming and often frustrating method.
To alleviate this, the layout tool vendors have introduced the concept of timing-driven placement, more commonly referred to as timing-driven layout (TDL).

The TDL method consists of forward annotating the timing information of the design generated by DC, to the layout tool. When using this method, the physical placement of cells is dictated by the timing constraints. The layout tool gives priority to timing while placing the cells, and tries not to violate the path constraints. DC generates the timing constraints in SDF format using the following command:

dc_shell> write_constraints -format <sdf | sdf-v2.1> -cover_design \
    -from <from list> -to <to list> -through <through list> \
    -output <file name>
The above command generates the constraints file in SDF format. Both versions 1.0 and 2.1 are supported. If the layout tool does not support the 2.1 version, then the user may always use the default version 1.0 by specifying "sdf" instead of "sdf-v2.1".

The write_constraints command provides many more options in addition to the ones illustrated in the above example; however, the use of the -cover_design option is more prevalent. The -cover_design option instructs DC to output just enough timing constraints so as to cover the worst path through every driver-load pin pair in the design. For additional information regarding this command and its options, the user is advised to refer to the DC reference manual.

A timing constraint file in SDF version 2.1 format, generated by DC with the -cover_design option, is illustrated in Example 9.1. The SDF file contains the TIMINGCHECK field containing a PATHCONSTRAINT for all the paths in a design. The last field of the PATHCONSTRAINT timing check contains three sets of numbers that define the path delay for a particular path segment. The three numbers, although the same in this example, correspond to minimum, typical, and maximum delay values. These numbers and their corresponding paths govern the placement of cells during layout.

Example 9.1

(DELAYFILE
  (SDFVERSION "OVI 2.1")
  (DESIGN "hello")
  (DATE "Mon Jul 20 22:59:49 1998")
  (VENDOR "Enterprise")
  (PROGRAM "Synopsys Design Compiler cmos")
  (VERSION "1998.02")
  (DIVIDER /)
  (VOLTAGE 2.70:2.70:2.70)
  (PROCESS "TYPICAL")
  (TEMPERATURE 95.00:95.00:95.00)
  (TIMESCALE 1ns)
  (CELL
    (CELLTYPE "hello")
    (INSTANCE)
    (TIMINGCHECK
      (PATHCONSTRAINT INPUT1 U751/A3 U751/ZN U754/A1 U754/ZN REG0/D
        (1.523:1.523:1.523) )
      (PATHCONSTRAINT INPUT2 U744/A1 U744/Z U745/A1 U745/ZN REG1/D
        (1.594:1.594:1.594) )
      (PATHCONSTRAINT REG1/CLK REG1/Q U737/A1 U737/ZN OUTPUT1
        (3.000:3.000:3.000) )
      (PATHCONSTRAINT REG2/CLK REG2/Q U1131/A2 U1131/ZN REG3/D
        (25.523:25.523:25.523) )

It must be noted that, depending upon the size of the design, the generation of the timing constraints for the entire design may take a considerable amount of time. Constraints may be generated for selected timing-critical paths (using the -from, -to and -through options) in order to avoid this problem. Alternatively, users may perform hierarchical place and route, where small sub-blocks are routed first, utilizing the TDL method. Hierarchical place and route is a preferred approach, since it is based upon the "divide and conquer" technique. Dividing the chip into small manageable blocks makes it relatively simpler for designers to tackle the run-time problems.

An alternative approach to performing TDL is to let the layout tool generate the timing constraints, based upon the boundary conditions, top-level constraints and timing exceptions of the design. This is a tool dependent feature; it is supported by some layout tool vendors, and may not be supported by others. The layout tool uses its own delay calculator to find the timing constraints for each path in the design, in order to place cells. This method is far superior to the others described previously, in the sense that it is considerably faster; however, a major drawback with this approach is that users are now compelled to use and trust the delay calculator of the layout tool. In any case, timing convergence can be achieved with relative ease, using this approach.

Performing TDL may also have an impact on the overall area. One may find that the area increases when the above approach is used. However, this point is debatable, with some users insisting that the total area gets reduced because of the rubber-band effect caused by the TDL method, while others swear by the opposite.
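As suggested above, constraint generation can be limited to known critical paths to cut run time on large designs. A hypothetical sketch in the dcsh syntax used in this chapter; the pin names are placeholders, not from Example 9.1's real design:

```
/* Generate SDF path constraints only for a critical
   register-to-register path, instead of covering the
   whole design with -cover_design */
write_constraints -format "sdf-v2.1" \
    -from find(pin, "REG1/CLK") \
    -to find(pin, "REG3/D") \
    -output "critical_paths.sdf"
```

The resulting SDF file contains PATHCONSTRAINT entries only for the listed endpoints, so the layout tool prioritizes just those paths during placement.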
9.2.1.2 Back Annotation of Floorplan Information

Total integration with the back-end tools allows DC to perform with increased efficiency, in order to achieve timing and area convergence. DC makes use of several formats that allow the layout information to be read by DC. For post-layout optimization, it is necessary for DC to know the physical location of each sub-block. Using the physical design exchange format (PDEF) grants DC access to this pertinent information. The PDEF file contains the cluster (physical grouping) information and the location of cells in the layout.

Pre-placement, the netlist is optimized using the wire-load models, spread across the logical hierarchy. However, the physical hierarchy may be different from the logical hierarchy. Physically, the cells/macros may be grouped depending on the pad locations or some other consideration. Therefore, it is imperative for DC to receive the physical placement information, for it to more effectively optimize the design. This is done by re-adjusting the application of the wire-loads on the design, based upon the physical hierarchy. DC uses the following command to read the physical placement information generated by the layout tool in PDEF format:

dc_shell> read_clusters -design <design name> <cluster file name>
Once the netlist has been re-optimized, the physical information may be passed back to the layout tool through the PDEF file. The following command in DC performs this task:

dc_shell> write_clusters -design <design name> -output <file name>
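Taken together, the PDEF exchange amounts to a short round-trip script. A sketch, assuming a top-level design named chip and a cluster file named chip.pdef (both illustrative; option spellings follow the read_clusters and write_clusters commands as used in this section):

```
/* Post-layout re-optimization using physical cluster information */
read_clusters -design chip "chip.pdef"   /* import placement groupings  */
compile -incremental_mapping             /* re-optimize; wire-loads now
                                            follow the physical hierarchy */
write_clusters -design chip -output "chip_new.pdef"   /* return to layout */
```

The incremental compile avoids re-synthesizing logic that already meets timing, so only the paths affected by the updated wire-load application are touched.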
9.2.1.3 Recommendations

a) In general, TDL performs well on all types of designs. However, definitely use TDL for timing critical and/or high-speed designs, in order to minimize synthesis-layout iterations and achieve timing convergence.

b) When handling large designs, generate timing constraints only for selected nets. This will save you a considerable amount of time. However, if your layout tool is capable of generating its own timing constraints, then it should be given preference over the other approach, in order to save time.

c) Perform hierarchical place and route for large designs. Although tedious, it will generally provide you with the best results as well as better control of the overall flow. Hierarchical place and route also expedites hand editing of the netlist that is sometimes required after routing is completed.

d) Always use physical placement information in PDEF format while performing post-layout optimization within DC, especially for large hierarchical designs.

9.2.2 Clock Tree Insertion

As explained in previous chapters, it is essential to control the clock latency and skew. Although some designs may actually take advantage of the positive skew to reduce power, most designs require minimal clock skew and clock latency. Larger values of clock skew cause race conditions that increase the chance of wrong data being clocked in the flops. Controlling the skew and latency requires a lot of effort and foresight.

As mentioned before, the layout tool performs the clock tree synthesis (CTS for short). The CTS is performed immediately after the placement of the cells, and before routing these cells. With input from the designer, the layout tool determines the best placement and style of the clock tree. Generally, designers are asked for the number of levels along with the types of buffers used for each level of the clock tree.
Obviously, the number of levels is dependent on the fanout of the clock signal. In a broad sense, the number of levels of the clock tree is inversely proportional to the drive strength of the gates used in the clock tree. In other words, you will need more levels if low drive strength gates are used, while the number of levels is reduced if high drive strength gates are used.

To minimize the clock skew and clock latency, designers may find the following recommendations helpful. It must be noted that these recommendations are not hard and fast rules. Designers often resort to using a mixture of techniques to solve the clocking issues.

a) Use a balanced clock tree structure with the minimum number of levels possible. Try not to go overboard with the number of levels. The more the levels, the greater the clock latency.

b) Use high drive strength buffers in large clock trees. This also helps in reducing the number of levels.

c) In order to reduce clock skew between different clock domains, try balancing the number of levels and types of gates used in each clock tree. For instance, if one clock is driving 50 flops while the other clock is driving 500 flops, then use low drive strength gates in the clock tree of the first clock, and high drive strength gates for the other. The idea here is to speed-up the clock driving 500 flops, and slow down the clock that is driving 50 flops, in order to match the delay between the two clock trees.

d) If your library contains balanced rise and fall buffers, you may prefer to use these instead. Remember, in general it is not always true that the balanced rise and fall buffers are faster (less cell delay) than the normal buffers. Some libraries provide buffers that have lower cell delays for rise times of signals, as compared to the fall times. For designs utilizing the positive edge trigger flops, these buffers may be an ideal choice. The idea is to study the library and choose the most appropriate gate available.
Past experience also comes in handy.

e) To reduce clock latency, you may try to use high drive inverters for two levels. This is because, logically, a single buffer cell consists of two inverters connected together, and therefore has a cell delay of two inverters. Using two separate inverters (two levels) will achieve the same function, but will result in reduced overall cell delay, since you are not using another buffer (2 more inverters) for the second level. Use this approach only for designs that do not contain gated clocks. The reason for this is explained later (point h).

f) Do not restrict yourself to using the same type and drive strength of gate for CTS. Current layout tools allow you to mix and match.

g) For a balanced clock tree (e.g., 3 levels), the first level is generally a single buffer driven by the Pad. In order to reduce clock skew, the first level buffer is placed near the center of the chip, so that it can connect to the next level of buffers through equal interconnect wires. This creates a ring-like structure with the first buffer in the center, the second set of buffers (second level) surrounding it, and the last stage surrounding the second level. Thus, the distances between the first, second and third levels are kept at a minimum. However, although a good arrangement, it does result in the first level buffer being placed farthest from the source (Pad). If a minimum size wire is used to route the clock network from the Pad source to the first buffer, it will result in a large RC delay that will affect the clock latency. Therefore, it is necessary to size-up (widen) this wire from the Pad source to the input of the buffer (first level), in order to reduce the resistance of the wire, thereby reducing the overall latency.
Depending upon the size of your design and the number of levels, you may also need to perform this operation on other levels.

h) In order to minimize the skew, the layout tool should have the ability to tap the clock signal from any level of the clock tree. This is especially important for designs that contain gated clocks. If the same clock is used for other ungated flops, then the gating results in additional delay, hence the skew. If the clock tree ended at the gate, the additional delay would cause a large skew between the gated-clock flop and the ungated-clock flop, as shown in Figure 9-1(a). Therefore it is necessary to tap the clock source from a level up for the gated-clock flop, while maintaining the full clock tree for the ungated-clock flop, as illustrated in Figure 9-1(b). However, if inverters are used in the clock tree (point e), then the above approach breaks down. In this case, do not use inverters as part of the clock tree.

Figure 9-1. CTS for Gated Clocks versus Non-gated Clocks

9.2.3 Transfer of Clock Tree to Design Compiler

Clock tree synthesis done by the layout tool modifies the physical design (cells are added in the clock network). This modification is absent from the original netlist present in DC. Therefore, it is necessary for the user to accurately transfer this information to DC. There are several ways to do this.

a) Generally, all layout tools have the capability to write out the design in EDIF or Verilog format. Since everything may appear flat to the layout tool, designers may receive a flat netlist from the layout tool. Of course, this netlist will contain the clock tree information, but the enormous size of the netlist itself may be daunting and unmanageable. Furthermore, due to the absence of the original design hierarchy, the flat netlist is not easily readable. Another problem with this approach is that the user is now forced to designate this netlist as the "golden" netlist, meaning that all verification (LVS etc.)
has to be performed against this netlist. Doing this is comparable to "digging your own grave" because, if the layout tool botches the layout, the same anomalies will be reflected in the netlist. Of course, the LVS will pass without flagging any errors, since the user is checking physical layout data against a layout-generated netlist, i.e., performing LVL instead of LVS.

An alternative is to perform formal verification between the original hierarchical netlist and the layout-generated flat netlist. This certainly is a viable approach, but it has its own limitations regarding the size and complexity of the design. Most formal verification tools suffer from this limitation, i.e., they excel at individual block verification, but fall short in full chip verification. This is especially true for verifying flat netlists against hierarchical netlists.

b) The second approach is to transfer only the point-to-point clock delay information, starting from the clock source to its endpoints (the clock pins of the flops). The delay calculator of the layout tool will perform this task, and upon instruction will provide the designer the point-to-point timing information of the clock tree in SDF format. Designers may back annotate this SDF file to the original design in order to determine the clock latency and skew. This method does not require the clock tree to be transferred to DC from the layout tool. However, this approach has its own pitfalls. Primarily, this approach does not allow the usage of SPEF data from the layout for back annotation to PT. Furthermore, the designer is now compelled to trust the delay calculator of the layout tool, i.e., another variable has been introduced that requires qualification. Since the layout libraries are separate from the Synopsys libraries, in order to get the same delay numbers, the timing numbers present in the Synopsys library need to match exactly those of the layout library.
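Once the point-to-point clock delays are available, the latency and skew computations mentioned above reduce to simple arithmetic over the per-flop arrival times. A minimal sketch (the pin names and arrival values are invented for illustration, not taken from any real SDF):

```python
# Sketch: derive clock latency and skew from point-to-point clock
# arrival times (e.g., extracted from the layout tool's SDF output).
# The arrival data below is hypothetical illustration data.
def latency_and_skew(arrivals):
    """arrivals: dict mapping flop clock pin -> source-to-pin delay (ns)."""
    latest = max(arrivals.values())    # clock latency (insertion delay)
    earliest = min(arrivals.values())
    skew = latest - earliest           # worst-case skew across the tree
    return latest, skew

arrivals = {"u1/ff0/CK": 1.21, "u2/ff3/CK": 1.35, "u3/ff7/CK": 1.28}
latency, skew = latency_and_skew(arrivals)
print(round(latency, 2), round(skew, 2))  # 1.35 0.14
```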
The dilemma of verifying the original netlist against the layout database still exists, especially since the original netlist does not contain the extra cells and nets due to clock tree insertion. However, one can certainly find work-arounds and may use this approach successfully.

c) A solution to all of the above problems is to creatively transfer the entire clock tree to DC without changing the hierarchy of the design. Some layout tools may even generate Synopsys scripts that contain dc_shell commands like disconnect_net, create_cell, create_port and connect_net. On execution, these commands insert the clock tree into the original design database in DC, while still maintaining the hierarchy. Of course, one needs to verify the resulting modified netlist against the original netlist by performing formal verification. Since the design hierarchy is not altered, the formal verification runs smoothly.

d) Another solution involves brute force modification. Generally the layout tools, upon completion of CTS, produce a summary report of all changes made to the design. One may take advantage of this report and parse it to retrieve the relevant information (e.g., names of clock tree insertion points, types and names of buffers etc.) using scripting languages like Perl or Awk. Once the information is gathered, the original Verilog netlist may be directly modified without going through DC. The modified netlist should be read back into DC to check for any syntax errors. In addition, the modified netlist should also be formally verified against the original netlist.

Recently, upon realizing this problem, the layout tool vendors have facilitated this process by generating the hierarchical netlist from the layout database. This netlist contains the clock tree information and should be verified formally against the original netlist.
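The report parsing in approach (d) can be sketched in a few lines of any scripting language. The report format below is hypothetical, since each layout tool prints its own summary; the regular expression would have to be adapted to the tool in use:

```python
import re

# Sketch of approach (d): parse a CTS summary report and collect the data
# needed to patch the netlist. The report format here is an assumption --
# every layout tool prints its own; adapt the regex accordingly.
LINE = re.compile(r"INSERTED\s+(\S+)\s+(\S+)\s+at\s+(\S+)")
#                  e.g.: INSERTED BUFX8 clk_buf_12 at core/u_alu/clk

def parse_cts_report(text):
    """Return a list of (cell_type, instance_name, insertion_point)."""
    return [m.groups() for m in LINE.finditer(text)]

report = """\
INSERTED BUFX8 clk_buf_0 at core/clk
INSERTED BUFX4 clk_buf_1 at core/u_alu/clk
"""
for cell, inst, point in parse_cts_report(report):
    print(cell, inst, point)
```

The tuples gathered this way drive the direct edits to the Verilog netlist.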
Upon successful verification, the netlist may be declared as "golden".

9.2.4 Routing

After the clock tree insertion, the final step involves routing the chip. In a broad sense, the routing is divided into two phases:

1. Global routing, and
2. Detailed routing.

The first routing phase is called the global route, in which the global router assigns a general pathway through the layout for each net. During global route, the layout surface is divided into several regions. The global router decides the shortest route through each region in the layout surface, without laying the geometric wires.

The second routing phase is called the detailed route. The detailed router makes use of the information gathered by the global route and routes the geometric wires within each region of the layout surface.

It must be noted that if the run-time of global route is long (more than the placement run-time), it indicates bad placement quality. In this case, the placement should be performed again with emphasis on reduced congestion.

9.2.5 Extraction

Until now, synthesis and optimization were performed utilizing the wire-load models. The wire-load models are based on statistically estimating the final routing capacitances. Because of the statistical nature of wire-load models, they may be completely inaccurate compared to the real delay values of the routed design. This variation between the wire-load models and the real delay values results in a non-optimized design. The layout database is extracted to produce the delay values necessary to further optimize the design. These values are back annotated to PT for static timing analysis, and to DC for further optimization and refinement of the design.

9.2.5.1 What to Extract?

In general, almost all layout tools are capable of extracting the layout database using various algorithms. These algorithms define the granularity and the accuracy of the extracted values.
Depending upon the chosen algorithm and the desired accuracy, the following types of information may be extracted:

1. Detailed parasitics in DSPF or SPEF format.
2. Reduced parasitics in RSPF or SPEF format.
3. Net and cell delays in SDF format.
4. Net delays in SDF format + lumped parasitic capacitances.

The DSPF (Detailed Standard Parasitic Format) contains RC information for each segment (multiple R's and C's) of the routed netlist. This is the most accurate form of extraction. However, due to long extraction times on a full design, this method is not practical. This type of extraction is usually limited to critical nets and clock trees of the design.

The RSPF (Reduced Standard Parasitic Format) represents RC delays in terms of a pi model (2 C's and 1 R). The accuracy of this model is less than that of DSPF, since it does not account for the multiple R's and C's associated with each segment of the net. Again, the extraction time may be significant, thus limiting the usage of this type of information. Target applications are critical nets and small blocks of the design. Both detailed and reduced parasitics can be represented by OVI's (Open Verilog International) Standard Parasitic Exchange Format (SPEF).

The last two (numbers 3 and 4) are the most common types of extraction used by designers. Both utilize the SDF format. However, there is a major difference between the two: number 3 uses the SDF to represent both the cell and net delays, whereas number 4 uses the SDF to represent only the net delays. The lumped parasitic capacitances are generated separately. Some layout tools generate the lumped parasitic capacitances in the Synopsys set_load format, thus facilitating direct back annotation to DC or PT.

It is worth mentioning that PT can read all five formats (DSPF, RSPF, SPEF, SDF and set_load), whereas DC can only read the SDF and set_load file formats.
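To see what the pi-model reduction buys, the standard first-order (Elmore) estimate of the delay through such a reduced net can be computed directly. This is a sketch only; the driver resistance and RC values below are invented for illustration:

```python
# First-order (Elmore) delay estimate through an RSPF-style pi model:
# driver output resistance r_drv drives the near capacitance c1, then
# the wire resistance r, then the far capacitance c2 at the load pin.
# All numeric values here are made-up illustration data.
def pi_model_delay(r_drv, c1, r, c2):
    """Elmore delay from the driver output to the far end of the pi model."""
    return r_drv * (c1 + c2) + r * c2

# e.g., 1 kOhm driver, 20 fF near / 30 fF far caps, 200 Ohm wire
d = pi_model_delay(1e3, 20e-15, 200.0, 30e-15)
print(d)  # delay in seconds
```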
The SDF and set_load file formats are not as accurate as the DSPF or RSPF types of extraction; however, the time to extract the layout database is significantly reduced. For most designs this type of extraction provides sufficient accuracy and precision. However, as suggested, only critical nets and clocks in the design should be targeted for DSPF or RSPF types of extraction.

For the layout tool to generate a full SDF (the number 3 approach), it uses its own delay calculator to compute the cell delays, based upon the output loading and the transition time of the input signal. However, there is a flaw in using this approach. The synthesis was done using DC, which used its own delay calculator to optimize the design. By choosing to use the full SDF generated by the layout tool, we are now introducing another variable that needs qualification. How do we know that the delay calculator used by the layout tool is more accurate than the one used by DC? Also, upon back annotation of the full SDF to PT, the full capability of PT is not being utilized. This is because the cell delays are already fixed in the SDF file, and performing case analysis in PT will not yield accurate results, even if the conditional delays are present in the SDF file. This is discussed at length in Chapter 10.

Another problem exists with the above approach. Since only the cell and net delays are back annotated, DC does not know the parasitic capacitances associated with each net of the design. Therefore, when performing post-layout optimization, DC can only make use of the wire-load models to make incremental changes to the design, thus defeating the whole purpose of back annotation. However, if the fourth approach is used (net delays in SDF format + lumped parasitic capacitances), DC makes use of the net loading information during post-layout optimization (e.g., to size up/down gates).
To avoid these problems, it is recommended that only the net RC delays (also called interconnect wiring delays) and lumped parasitic capacitances be extracted from the layout database. Upon back annotation, DC or PT uses its own delay calculator to compute the cell delays, based upon the back annotated interconnect RC's and capacitive net loading.

To summarize, the following types of information should be generated from the layout tool for back annotation to DC in order to perform post-layout optimization:

a) Net RC delays in SDF format.
b) Capacitive net loading values in set_load format.

For static timing analysis using PT, the following types of information can be generated:

a) Net RC delays in SDF format.
b) Capacitive net loading values in set_load format.
c) Parasitic information for clock and other critical nets in DSPF, RSPF or SPEF file formats.

9.2.5.2 Estimated Parasitic Extraction

The extraction of parasitics at the pre-route level (after global routing) provides a closer approximation to the parasitic values of the final routed design. If the estimates indicate a timing problem, it is fairly easy to quickly re-floorplan the design before starting the detailed route. This method reduces synthesis-layout iterations and avoids wasting valuable time.

The difference between the estimated extracted delay values after the global routing and the real delay values after the detailed routing is minimal. In contrast, the difference between the floorplan extraction and the detailed route extraction may be significant. Therefore it is prudent that after floorplanning, cell placement and clock tree insertion, the design be globally routed before extracting the estimated delay numbers. A complete extraction flow is shown in Figure 9-2. If major timing violations exist after global route, it may be necessary to re-optimize the design within DC with estimated delays back annotated.
However, if the timing violations are not severe, then re-floorplanning the design (and/or re-placing cells) may achieve the desired result. The detailed routing should be performed only after eliminating all timing violations flagged after the global routing phase.

[Figure 9-2. Routing and Extraction Flow: synthesis and optimization; floorplanning, placement and clock tree insertion; global routing; extraction of estimated delays; loop back to synthesis on major timing violations, or to floorplanning on minor ones; detailed routing; extraction of real delays.]

9.2.5.3 Real Parasitic Extraction

A full extraction (actual values, no estimation) is performed after the design has been satisfactorily globally routed, i.e., no DRC violations and achievement of the required die-size. This is by far the most critical part of the entire process. The final product may not work if the extracted values are not accurate. With technologies shrinking to 0.18 micron and below, the extraction algorithms of the layout tool need to take into account the second and third order parasitic effects. Any slight deviation of these values may cause the design to fail. Consider a case where the extracted values are too pessimistic. Static timing analysis indicates that the signals are meeting hold-time requirements. However, in reality, the signals are arriving faster, causing real hold-time violations; but due to the pessimistic back annotated parasitic capacitances, the design is passing static timing. The case for setup-time is similar, if the extracted values are too optimistic.

9.3 Post-Layout Optimization

Post-layout optimization is performed to further optimize and refine the design. The process involves back annotating the data generated by the layout tool to the design residing in DC. Depending upon the severity of the violations, the optimizations may include full synthesis or minor adjustments through the use of the in-place optimization (IPO) technique.
As explained in the previous section, the layout related data suitable for back annotation to DC are:

a) Net RC delays in SDF format.
b) The set_load file, containing capacitive net loading.
c) Physical placement information in PDEF format.

9.3.1 Back Annotation and Custom Wire Loads

The next step involves analyzing the static timing of the design. Designers may choose to perform this step using PT or DC's internal static timing analysis engine. In any case, post-layout optimization can only be performed within DC; therefore the layout data needs to be back annotated to both DC and PT.

Depending on the process technology, the layout tool may generate two separate files that correspond to the worst and the best case. If there are two separate Synopsys libraries pertaining to each case, then back annotate the worst case layout data to the design using the worst case Synopsys library. Similarly, best case layout data should be back annotated to the design mapped to the best case Synopsys library.

Some vendors provide only one Synopsys library that covers all cases, i.e., the library is characterized for the TYPICAL case, with the WORST and the BEST case values derived (derated) from the TYPICAL case. In a situation like this, it is recommended that the designer back annotate the worst case numbers to the design with operating conditions set to WORST, in order to perform the worst case timing analysis. The best case timing analysis should be performed with best case timing numbers back annotated to the design with operating conditions set to BEST.

Use the following dc_shell commands to back annotate layout-generated information to the design present in DC, before performing post-layout optimization:

dc_shell> current_design <design name>
dc_shell> include -quiet <set_load file name>
dc_shell> read_timing -f sdf <SDF file name>
dc_shell> read_clusters <PDEF file name>
Use the following pt_shell commands to back annotate layout-generated information to the design in PT, before performing static timing analysis:

pt_shell> current_design <design name>
pt_shell> source <set_load file name>
pt_shell> read_sdf <SDF file name>
pt_shell> read_parasitics <parasitics file name>
After back annotation in PT, if the design fails static timing with a substantial number of violations, the user may need to perform re-synthesis (or even re-code certain blocks). Therefore, it is prudent to use the existing layout information for re-synthesis. Discarding the layout data during re-synthesis only wastes the time and effort spent on layout. Furthermore, the layout data is helpful in fine tuning the design. To achieve the maximum benefit, custom wire-load models should be generated from DC, using the existing layout information. The resulting gate-level netlist using the custom wire-load models provides a closer match to the post-layout timing results.

Use the following dc_shell command to create custom wire-load models:

create_wire_load -design <design name>
                 -cluster <cluster name>
                 -trim <trim value>
                 -percentile <percentile value>
                 -output <file name>
Although there are other options available for the above command, generally the ones listed above suffice for most designs. The trim value is used to discard data that falls below a certain value, while the percentile value is used to calculate the average value. By altering the percentile value, one may add optimism or pessimism to the custom wire-load models. The cluster name is the grouping name that was used during layout to group cells or blocks together.

After the creation of the custom wire-load models (CWLMs), the library should be updated to account for the new CWLMs. This is because the original technology library contains only the generic wire-load models, which are not particular to a specific design. To use the CWLMs that were generated by the above command, the library must be updated. The following command may be used to update the library present in DC memory:

update_lib
It must be noted that the above command does not alter or overwrite the source library. It only updates the DC memory to include the new CWLMs.

9.3.2 In-Place Optimization

For designs with minor timing violations after layout, there is no need to perform a full chip synthesis. In-place optimization, or IPO, is an excellent method to fine-tune a design in order to eliminate these violations. The concept of IPO is to keep the structure of the design intact while modifying only the failing parts of the design, thereby having a minimal impact on the existing layout. IPO is commonly used to add/swap gates at specific locations to fix setup and/or hold-time problems.

IPO is library dependent and can be limited to perform only the following:

a) Resize cells.
b) Insert or delete existing cells (mainly buffers).

Usually, all Synopsys technology libraries have an attribute defined that enables or disables IPO. The attribute, and the value that enables IPO in a library, is:

in_place_swap_mode : match_footprint

Along with the above library-level attribute, all cells in the Synopsys library also carry cell_footprint information. For example, two cells with the same functionality but different drive strengths may have the same cell_footprint value. This means that the two cells have identical physical coverage area, therefore replacing one with the other will not impact the existing layout, i.e., adjacent cells will not shift. However, this restriction, along with cell sizing, area optimization and buffer insertion, is controlled by the use of the following variables:

compile_ignore_footprint_during_inplace_opt = true | false
compile_ok_to_buffer_during_inplace_opt = true | false
compile_ignore_area_during_inplace_opt = true | false
compile_disable_area_opt_during_inplace_opt = true | false

Using these variables gives the designer the ability to control the amount of changes made in the design.
The appropriate values of the above variables may be set before performing IPO at the dc_shell command line, or in the Synopsys setup file. The IPO is invoked by using the following commands:

dc_shell> compile -in_place
dc_shell> reoptimize_design -in_place

Both commands are similar in nearly all respects, i.e., both make use of the back annotated layout information, with the exception of physical information. The reoptimize_design command makes use of the physical location information while re-optimizing the design. Another difference between the two commands is that the "compile -in_place" command uses the library wire-load models during IPO, whereas the "reoptimize_design -in_place" command makes use of the custom wire-load models. Therefore, it is imperative that the latter command be used while performing IPO.

It must be noted that reoptimize_design, when used on its own, makes major modifications to the design. To eliminate this possibility, always use the -in_place option.

9.3.3 Location Based Optimization

Location Based Optimization, or LBO, is an integral part of IPO, and is invoked automatically while performing IPO for designs containing back annotated physical placement location information in PDEF format. In this chapter LBO is identified separately from IPO due to its importance and additional capability. Performing IPO with LBO improves the overall optimization of the design, since DC now has access to the cell placement information. This allows DC to apply more powerful algorithms during optimization.

Consider a path segment starting from a primary input and ending at a flop. Post-layout timing analysis reveals a hold-time problem for this path. As shown in Figure 9-3, LBO optimization will add buffer(s) near the flop (endpoint), instead of adding them at the source (startpoint). Adding buffer(s) at the source may cause a setup-time failure for another path originating from the same source.
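The reasoning can be checked with toy numbers. Assuming a 5 ns setup requirement on the path sharing the same startpoint (all delay values below are invented for illustration), inserting the hold-fix buffer on the shared segment breaks the setup path, while inserting it at the endpoint does not:

```python
# Toy numeric illustration of why LBO adds hold-fix buffers near the
# failing endpoint rather than at the shared startpoint.
# All delay numbers (ns) are invented for illustration.
shared_seg   = 1.0   # in1 to fanout point (shared by both paths)
to_out1      = 3.5   # fanout point to out1 (setup-critical path)
setup_budget = 5.0   # required max delay in1 -> out1
buf_delay    = 0.8   # delay of the inserted hold-fix buffer(s)

# Buffer on the shared startpoint segment delays BOTH paths:
out1_delay_start = shared_seg + buf_delay + to_out1
# Buffer near the flop endpoint delays only the failing hold path:
out1_delay_end = shared_seg + to_out1

print(out1_delay_start > setup_budget)   # True  -> new setup violation
print(out1_delay_end <= setup_budget)    # True  -> setup path untouched
```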
[Figure 9-3. IPO versus LBO: with a 5 ns setup requirement between in1 and out1, the hold-fix buffer(s) should be placed near the flop's input (endpoint), not on the segment shared with the in1-to-out1 path.]

In addition to the enhanced capability of inserting buffers at optimal locations, LBO also provides better modeling of cross cluster nets, and of the new nets that were created to connect the inserted or deleted buffers. This is due to the fact that DC is aware of the locations where the cells were inserted or deleted.

In order for "reoptimize_design -in_place" to perform LBO, the following variables need to be set to true, in addition to the IPO related variable "compile_ok_to_buffer_during_inplace_opt":

lbo_buffer_removal_enabled = true
lbo_buffer_insertion_enabled = true

LBO is not enabled for buffer insertion or removal if the value of the above variables is set to false (the default). Location information is disregarded in this case, with only IPO algorithms used to handle the buffer insertion or deletion.

The changes made by performing IPO and/or LBO, using either "compile -in_place" or "reoptimize_design -in_place", can be written out to a file by using the following dc_shell variable:

reoptimize_design_changed_list_file_name = <file name>

If the file already exists, the new set of changes will be appended to the same file.

9.3.4 Fixing Hold-Time Violations

Nearly every design undergoes the process of fixing hold-time violations, especially for faster technologies. Designers tackle this problem using various approaches. For this reason, a separate section is devoted to discussing the issues arising from these methods, and to consolidating them under one topic.

Most designers synthesize the design with tight constraints in order to maximize the setup-time margin. The resulting effect is fast logic, with data arriving faster at the input of the flop with respect to the clock. This may result in hold-time violations due to data changing value before being latched by the flop.
Generally, designers prefer to fix the hold-time violations after initial placement and routing of the design, thereby making use of more accurate delay numbers. Removing hold-time violations involves delaying the data with respect to the clock, so that the data does not change for a specified amount of time (the hold-time) after the arrival of the clock edge. There are several methods utilized by designers to insert the appropriate delays, as outlined below:

a) Using the Synopsys methodology.
b) Inserting delays manually.
c) Inserting delays automatically, using brute force dc_shell commands.

9.3.4.1 Synopsys Methodology

Synopsys provides the following dc_shell command, which enables the compile command to fix the hold-time violations:

set_fix_hold <clock name>
dc_shell> set_fix_hold CLK

The above command may be used during initial compile (or post-layout) by setting the min/max libraries concurrently (version DC98 onwards), and specifying the min/max values for the set_input_delay command. The idea behind setting the min/max libraries at the same time is to eliminate the two-pass synthesis (initial synthesis for maximum setup-time, followed by re-optimization to fix hold-time violations) needed for almost all designs. Example 9.2 illustrates the methodology of fixing the post-layout hold-time violations using the single pass synthesis approach.

Example 9.2

set_min_library "<max library>" -min_version "<min library>"
set_operating_conditions -min BEST -max WORST
include net_delay.set_load
read_timing interconnect.sdf
read_clusters floorplan.pdef
set_input_delay -max 20.0 -clock CLK {IN1 IN2}
set_input_delay -min 1.0 -clock CLK {IN1 IN2}
set_output_delay -max 10 -clock CLK all_outputs()
set_fix_hold CLK
reoptimize_design -in_place

Alternatively, the design may be compiled for maximum setup-time using the worst case library, followed by re-optimization after layout in order to fix the hold-time violations by mapping the design to the best case library. Although this method uses the two-pass synthesis approach, it is still recommended because of its stable nature. Most designers prefer to use this approach because of the time and effort that has been invested in defining and maturing this methodology.

It must be noted that the above command is independent of the IPO commands. The IPO commands are generally used to fix hold-time violations after initial layout, with layout information back annotated to the design. Fixing pre-layout hold-time violations is accomplished by compiling the design incrementally, using the "compile -incremental" command.

The set_fix_hold command instructs DC to fix the hold-time violations by inserting buffers at appropriate locations. This again is controlled by the IPO related variables described previously. With buffer insertion disabled, the cells in the data path can only be swapped (i.e., replacing a higher drive strength gate with a lower drive strength gate) to increase the cell delay, thereby delaying the data arriving at the flop input.

9.3.4.2 Manual Insertion of Delays

If the timing analysis reveals a very small number of hold-time violations (less than 10 to 20 places), it may not be worthwhile to fix these violations using the set_fix_hold command. The delays in this case may be manually inserted in the netlist.
The designer may chain a string of buffers to delay the data with respect to the clock just enough so that it passes the hold-time checks. A point to note, however, is that a chain of buffers may not be able to provide adequate delay, since the delay is dependent on the placement of the buffers in the layout. Generally, the buffers will be placed very close to each other, therefore the total delay will be the sum of the delays through each cell. The interconnect delay itself will be insignificant, due to the close proximity of the placed cells.

To overcome this, it is recommended to link a number of high-fanin gates (e.g., an 8-input AND gate) with all inputs tied together, and connected to form a chain. The advantage of this approach is that the input pin capacitance of the high-fanin gates is now being utilized. The input pin capacitance of a high-fanin gate is usually much larger than that of a single-input buffer. Therefore, this method of delaying the data provides a solution that is independent of the cell placement location in the layout. This approach is suitable if the technology library does not contain delay cells. If such cells are present, then they should be targeted to fix the hold-time violations.

9.3.4.3 Brute Force Method

This is a unique method, but it requires expertise in scripting languages like Perl or Awk. For instance, suppose the timing report shows many hold-time violations, and fixing them through the Synopsys methodology means large run times. In this case, an alternative approach is to parse the timing reports (both worst-case and best-case) using a scripting language, to find the amount of slack (setup-time analysis) and the corresponding violation (hold-time analysis) of the failing paths. Using these numbers, the user may generate dc_shell commands like disconnect_net, create_cell and connect_net for the failing paths.
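The command-generation step can be sketched as follows, assuming the violations have already been parsed out of the timing report. The buffer cell name, its per-stage delay, and the instance naming scheme are all assumptions made for illustration:

```python
# Sketch of the brute-force hold fix: given parsed hold violations
# (endpoint pin -> negative hold slack in ns), emit the dc_shell
# create_cell commands for the delay-buffer chains. BUFX2 and its
# 0.1 ns delay are hypothetical library data.
BUF_CELL = "BUFX2"
BUF_DELAY = 0.1  # assumed delay per buffer stage (ns)

def hold_fix_commands(violations):
    """violations: dict mapping failing endpoint pin -> negative slack (ns)."""
    cmds = []
    for i, (pin, slack) in enumerate(violations.items()):
        n_bufs = int(-slack / BUF_DELAY) + 1   # buffers needed to cover slack
        for j in range(n_bufs):
            cmds.append(f"create_cell hold_buf_{i}_{j} {BUF_CELL}")
        # The disconnect_net / connect_net commands that splice the chain
        # in front of the endpoint pin would be emitted here as well.
    return cmds

for cmd in hold_fix_commands({"u_core/reg_12/D": -0.15}):
    print(cmd)
```

The emitted create_cell commands would be followed by the corresponding disconnect_net and connect_net commands for each failing endpoint.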
Upon execution of these commands in dc_shell, DC is forced to insert and connect buffers at appropriate places (this should be done near the endpoints of the failing paths). This is called a brute force method, done automatically. It is by no means a clean approach, but it works remarkably well. The time taken to fix hold-time violations using this approach is negligible compared to the Synopsys methodology.

9.4 Future Directions

The future holds considerable challenges for the EDA industry. The problem is compounded by the fact that layout tool vendors lack synthesis technology, while Synopsys lacks the back-end expertise. Nearly all back-end tool vendors provide an interface to Synopsys (through LTL), but total integration is missing.

Time-to-market is rapidly shrinking while design complexities are increasing. The problem is further aggravated by shrinking geometries, forcing ASIC designers to think about power and cross-talk along with timing, much earlier in the design cycle. The exchange of data between layout tools and DC (and PT) is certainly not efficient. Time wasted during synthesis-layout iterations is still a major bottleneck.

In the future, especially for VDSM technologies, the back-end tool vendors will be left with no choice but to incorporate synthesis technology. Many loyal Synopsys users will still prefer to perform synthesis and pre-layout optimizations of their designs using Synopsys tools; however, the post-layout optimization may move into the layout tools' arena, provided they gain the synthesis know-how. Post-layout optimization frequently involves insertion and/or deletion of cells in order to fix the timing problems. Therefore, the layout tools have a built-in advantage over Synopsys, as they do not need the PDEF information to find the placement location of each cell; thus the exchange of data back and forth between the layout tools and DC is eliminated.
Furthermore, almost all layout tools incorporate their own delay calculator, which further simplifies the process of swapping, adding, or removing cells in order to fix timing problems.

Synopsys, upon realizing these problems, is currently in the process of introducing a complete flow to layout, termed physical synthesis, which includes tight integration of synthesis with the place and route engine. This may bring a major change in the current methodology. Most designers may prefer this approach in order to maintain the golden nature of their netlist, rather than letting the layout tools handle the post-layout optimization. Only time will tell.

9.5 Chapter Summary

Links to layout is an important part of the integration between the layout tool and DC. This chapter focussed on all aspects of exchanging data to and from layout tools, in order for DC to perform better optimization and fine-tuning of the design.

Issues related to the transfer of clock tree information from the layout tool to DC were explained in detail. Cross checking the netlist generated by the layout tool against the original netlist remains a major bottleneck. Various alternatives were provided to the user in order to overcome this issue and choose the right solution.

Starting from how to generate a clean netlist from DC in order to minimize layout problems, this chapter covered placement and floorplanning, clock tree insertion, routing, extraction, and post-layout optimization techniques, including various methods to fix the hold-time violations. At each step, recommendations are provided to facilitate the user in choosing the right direction. As we move towards VDSM technologies, a future trend towards total integration between synthesis and back-end tools is also discussed.

CHAPTER 10: SDF GENERATION
For Dynamic Timing Simulation

The standard delay format, or SDF, contains timing information for all the cells in the design.
It is used to provide timing information for simulating the gate-level netlist.

As mentioned in Chapter 1, verification of the gate-level netlist of a design through dynamic simulation is not a recommended approach. The dynamic simulation method should be used to verify the functionality of the design at the RTL level only. Verification of a gate-level design using dynamic simulation depends solely on the coverage provided by the test-bench; therefore, certain paths in the design that are not sensitized will not get tested. In contrast, formal verification techniques provide superior validation of the design.

The dynamic simulation method for gate-level design verification is still dominant, and is used extensively by designers. For this reason, this chapter provides a brief description of generating the SDF file from DC and PT, which can be used to perform dynamic timing simulation of the design. Furthermore, a few innovative ideas and suggestions are provided to facilitate designers in performing successful simulation.

Please note that some of the information provided in this chapter is also described in previous chapters. Since this is an important topic, it is deemed necessary that a full chapter be devoted to SDF generation, for the sake of clarity and completeness.

10.1 SDF File

The SDF file contains timing information for each cell in the design. The basic timing data comprises the following:

a) IOPATH delay.
b) INTERCONNECT delay.
c) SETUP timing check.
d) HOLD timing check.
Following is an example SDF file that contains the timing information for two cells (a sequential cell feeding an AND gate), along with the interconnect delay between them:

(DELAYFILE
  (SDFVERSION "OVI 2.1")
  (DESIGN "top_level")
  (DATE "Dec 30 1997")
  (VENDOR "std_cell_lib")
  (PROGRAM "Synopsys Design Compiler cmos")
  (VERSION "1998.08")
  (DIVIDER /)
  (VOLTAGE 2.70:2.70:2.70)
  (PROCESS "WORST")
  (TEMPERATURE 100.00:100.00:100.00)
  (TIMESCALE 1ns)
  (CELL
    (CELLTYPE "top_level")
    (INSTANCE)
    (DELAY
      (ABSOLUTE
        (INTERCONNECT sub1/U1/Q sub1/U2/A1 (0.02:0.03:0.04) (0.03:0.04:0.05))
      )
    )
  )
  (CELL
    (CELLTYPE "dff1")
    (INSTANCE sub1/U1)
    (DELAY
      (ABSOLUTE
        (IOPATH CLK Q (0.1:0.2:0.3) (0.1:0.2:0.3))
      )
    )
    (TIMINGCHECK
      (SETUP (posedge D) (posedge CLK) (0.5:0.5:0.5))
      (SETUP (negedge D) (posedge CLK) (0.6:0.6:0.6))
      (HOLD (posedge D) (posedge CLK) (0.001:0.001:0.001))
      (HOLD (negedge D) (posedge CLK) (0.001:0.001:0.001))
    )
  )
  (CELL
    (CELLTYPE "and2")
    (INSTANCE sub1/U2)
    (DELAY
      (ABSOLUTE
        (IOPATH A1 Z (0.16:0.24:0.34) (0.12:0.23:0.32))
        (IOPATH A2 Z (0.11:0.21:0.32) (0.17:0.22:0.34))
      )
    )
  )
)

The IOPATH delay specifies the cell delay. Its computation is based upon the output wire loading and the transition time of the input signal. The INTERCONNECT delay is a path based, point-to-point delay, which accounts for the RC delay between the driving gate and the driven gate. This wire delay is specified from the output pin of the driving cell to the input pin of the driven cell. The SETUP and HOLD timing checks contain values that determine the required setup and hold-time of each sequential cell. These numbers are based upon the characterized values in the technology library.

10.2 SDF File Generation

The SDF file may be generated for pre-layout or post-layout simulations. The post-layout SDF is generated from DC or PT, after back annotating the extracted RC delay values and parasitic capacitances to DC or PT. The post-layout values thus represent the actual delays associated with the design.
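Before moving to file generation, note that every delay and timing-check value in the example above is a min:typ:max triplet. As a purely illustrative aside (plain Tcl, not a DC or PT command), one corner of such a triplet could be picked apart like this:

```tcl
# Purely illustrative plain Tcl: split an SDF "min:typ:max" triplet
# and select one corner (0 = min, 1 = typ, 2 = max).
proc sdf_corner {triplet corner} {
    lindex [split $triplet ":"] $corner
}

puts [sdf_corner "0.16:0.24:0.34" 2]   ;# prints the max value, 0.34
```

Simulators make the equivalent selection through their own min/typ/max switches when the SDF file is annotated.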
The following commands may be used to generate the SDF file:

DC Command:
    write_timing -format sdf-v2.1 -output <file name>

PT Command:
    write_sdf -version [1.0 or 2.1] <file name>
Note: By default, PT generates the 2.1 version of SDF.

10.2.1 Generating Pre-Layout SDF File

The pre-layout numbers contain delay values that are based upon the wire-load models. Also, the pre-layout netlist does not contain the clock tree. Therefore, it is necessary to approximate the post-route clock tree delays while generating the pre-layout SDF. In order to generate the pre-layout SDF, the following commands approximate the post-route clock tree values by defining the clock delay, skew, and the transition time.

DC Commands:
    create_clock -period 30 -waveform {0 15} CLK
    set_clock_skew -delay 2.0 CLK
    set_clock_transition 0.2 CLK

PT Commands:
    create_clock -period 30 -waveform [list 0 15] [list CLK]
    set_clock_latency 2.0 [get_clocks CLK]
    set_clock_transition 0.2 [get_clocks CLK]

By setting (fixing) these values as illustrated above, designers may assume that the resulting SDF file will also contain these values, i.e., that the clock delay from the source to the endpoint (clock input port of the flops) is fixed at 2.0ns. However, this is not the case. DC only uses the above commands to perform static timing analysis, and does not output this information to the SDF file. To avoid this problem, the designer should force DC to use the specified delay value instead of calculating its own. To ensure the inclusion of 2.0ns as the clock delay, a dc_shell command (explained later in the section) is used to "massage" the resulting SDF.

At the pre-layout level, the clock transition time should also be specified. Failure to fix the transition time of the clock results in false values computed for the driven flops. Again, the culprit is the missing clock tree at the pre-layout level. The absence of the clock tree forces the assumption of high fanout for the source clock, which in turn causes DC to compute slow transition times for the entire clock network.
The slow transition times will affect the driven flops (endpoint flops), resulting in large delay values computed for them.

Figure 10-1. Specifying Clock Tree Delay

Consider the diagram shown in Figure 10-1. The dotted lines illustrate the placement of clock tree buffers, synthesized during layout. At the pre-layout level, these buffers do not yet exist. However, there usually is a buffer/cell (shown as a shaded cell) at the clock source. This cell may be a big driver, instantiated by the designer with the sole purpose of driving the future clock tree, or it may simply be an input pad. Let us assume that this is an input pad (called CLKPAD) with input pin A and output pin Z. At the pre-layout level, the output pin Z connects directly to all the endpoints. The easiest way to fix the SDF, so that it reflects the 2.0 ns clock delay from the source "CLK" to all the endpoints, is to replace the delay value of the shaded cell (from pin A to pin Z) calculated by DC, with 2.0 ns. This can be achieved by using the following dc_shell command:

dc_shell> set_annotated_delay 2.0 -cell \
          -from CLKPAD/A -to CLKPAD/Z

Note: A similar command also exists for PT.

The above command replaces the value calculated by DC with the one specified, i.e., 2.0 ns. This delay gets reflected in the SDF file in the form of the IOPATH delay, for the cell CLKPAD, from pin A to pin Z. Fixing the delay value of the clock solves the problem of clock latency. However, what happens to the delay values of the driven flops? Designers may incorrectly assume that DC uses the specified clock transition for the sole purpose of performing static timing analysis, and does not use the specified values to calculate delays of the driven flops. This is not so. DC uses the fixed transition value of the clock to calculate delays of driven gates.
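Taken together, the pre-layout approximation amounts to three clock commands plus one annotated cell delay. As a convenience they could be bundled into a small pt_shell (Tcl) procedure; this is only a sketch, and the procedure name, pad pin names, and values are illustrative, not part of any Synopsys tool:

```tcl
# Sketch only: bundle the pre-layout clock approximations in pt_shell.
# Assumes an even clock period, and a clock pad with input pin A and
# output pin Z, as in Figure 10-1.
proc approximate_clock_tree {clk_port pad period latency transition} {
    create_clock -period $period \
        -waveform [list 0 [expr $period / 2]] [list $clk_port]
    set_clock_latency $latency [get_clocks $clk_port]
    set_clock_transition $transition [get_clocks $clk_port]
    # Force the pad cell delay to the assumed tree latency, so that
    # the value survives into the written SDF (see discussion above).
    set_annotated_delay $latency -cell -from $pad/A -to $pad/Z
}

approximate_clock_tree CLK CLKPAD 30 2.0 0.2
```

Wrapping the values this way keeps the clock approximation consistent between the constraints used for analysis and the delays written to the SDF file.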
Not only are the transition values used to perform static timing analysis, but they are also used while computing the delays of the driven cells. Thus the SDF file contains the delay values that are approximated by the designer at the pre-layout phase.

10.2.2 Generating Post-Layout SDF File

The post-layout design contains the clock tree information. Therefore, all the steps that were needed to fix the clock latency, skew, and clock transition time during the pre-layout phase are not required for post-layout SDF file generation. Instead, the clock is propagated through the clock network to provide the real delays and transition times. As explained in Chapter 9, only the extracted parasitic capacitances and RC delays should be back annotated to DC or PT, for final SDF generation. The following commands may be used to back annotate the extracted data to the design and specify the clock information while generating the post-layout SDF file for simulation:

DC Commands:
    read_timing -format sdf <RC delay file>
    include -quiet <parasitic capacitance file>
    create_clock -period 30 -waveform {0 15} CLK
    set_clock_skew -propagated CLK

PT Commands:
    read_sdf <RC delay file>
    source <parasitic capacitance file>
    read_parasitics <parasitics file>
    create_clock -period 30 -waveform [list 0 15] [list CLK]
    set_propagated_clock [get_clocks CLK]

10.2.3 Issues Related to Timing Checks

Sometimes, during simulation, unknowns (X's) are generated that cause the simulation to fail. These unknowns are generated due to the violation of setup/hold timing checks. Most of the time these violations are real; however, there are instances where a designer may want to ignore some violations related to parts of the design, but still verify others. This is generally unachievable, due to the simulator's inability to turn off the X-generation on a selective basis. Nearly all simulators provide capabilities to ignore the timing violations, generally for the whole design. They do not have the ability to ignore the timing violation for an isolated instance of a cell in the design. Due to this reason, designers are often forced to either modify the simulation library or live with the failed result.

Modifying the simulation library is also not a viable approach, since turning off the X-generation can only be performed on a cell. This cell may be instantiated multiple times in the design. Turning off the X-generation for this cell will prevent the simulator from generating X's for all the instances of the cell in the design. This is definitely not desired, as it may mask the real timing problems lying elsewhere in the design.

For example, a design may contain multiple clock domains, and the data traverses from one clock domain to the other through synchronization logic. Although this logic will work perfectly on a manufactured device, it may cause hold-time violations when simulated. This will cause the simulation to fail for the design. Another example is related to the type of methodology used for synthesis. Some designers prefer to fix the hold-time violations only after layout. Failing to falsify or remove the hold-time values from the pre-layout SDF file may cause the simulator to generate an X (unknown) for the violating flop.
This X may propagate to the rest of the logic, causing the whole simulation to fail. To prevent these problems, one may need to selectively falsify the values of the setup and hold-time constructs in the SDF file, for the simulation to succeed. The SDF file is instance based (rather than cell based); therefore, selective targeting of the timing checks is easily attained. Instead of manually removing the setup and hold-time constructs from the SDF file, a better way is to zero out the setup and hold-times in the SDF file, only for the violating flops, i.e., replace the existing setup and hold-time numbers with zeros. Back-annotating the zero value for the setup and hold-time to the simulator prevents it from generating unknowns (if both setup and hold-time are zero, there cannot be any violation), thus making the simulation run smoothly. The following dc_shell command may be used to perform this:

dc_shell> set_annotated_check 0 -setup -hold \
          -from REG1/CLK \
          -to REG1/D

Note: A similar command also exists for PT.

10.2.4 False Delay Calculation Problem

This topic is covered in Chapter 4, but is included here for the sake of completeness. The delay calculation of a cell is based upon the input transition time and the output load capacitance of the cell. The input transition time of a cell is evaluated based upon the transition delay of the driving cell (previous cell). If the driving cell contains more than one timing arc, then the worst transition time is used as input to the driven cell. This causes a major problem when generating the SDF file for simulation purposes.

Consider the logic shown in Figure 10-2. The signals reset and signal_a are inputs to the instance U1. Let us presume that the reset signal is not critical, while signal_a is the one that we are really interested in. The reset signal is a slow signal; therefore the transition time of this signal is larger compared to signal_a, which has a faster transition time.
This causes two transition delays to be computed for cell U1 (2 ns from A to Z, and 0.3 ns from B to Z). When generating SDF, the two values will be written out separately as part of the cell delay for the cell U1. However, the question now arises: which of the two values does DC use to compute the input transition time for cell U2? DC uses the worst (maximum) transition value of the preceding gate (U1) as the input transition time for the driven gate (U2). Since the transition time of the reset signal is larger compared to signal_a, the 2 ns value will be used as the input transition time for U2. This causes a large delay value to be computed for cell U2 (the shaded cell).

Figure 10-2. False Delay Calculation

To avoid this problem, one needs to instruct DC not to perform the delay calculation for the timing arc from pin A to pin Z of cell U1. This step should be performed before writing out the SDF. The following dc_shell command may be used to perform this:

dc_shell> set_disable_timing U1 -from A -to Z

Unfortunately, this problem also exists during static timing analysis. Failure to disable the timing computation of the false path leads to large delay values computed for the driven cell.

10.2.5 Putting it Together

The following DC scripts combine all the information provided above and may be used to generate the pre- and post-layout SDF, to be used for timing simulation of an example tap controller design.
DC script for pre-layout SDF generation:

active_design = tap_controller

read active_design + ".db"
current_design active_design
link

set_wire_load LARGE -mode top
set_operating_conditions WORST

create_clock -period 33 -waveform {0 16.5} tck
set_clock_skew -delay 2.0 tck
set_clock_transition 0.2 tck

set_driving_cell -cell BUFF1 -pin Z all_inputs()
set_drive 0 {tck trst}
set_load 50 all_outputs()

set_input_delay 20.0 -clock tck -max all_inputs()
set_output_delay 10.0 -clock tck -max all_outputs()

/* Approximate the clock tree delay */
set_annotated_delay 2.0 -cell -from CLKPAD/A \
                              -to CLKPAD/Z

/* Assuming, only REG1 flop is violating hold-time */
set_annotated_check 0 -setup -hold \
                      -from REG1/CLK \
                      -to REG1/D

write_timing -format sdf-v2.1 \
             -output active_design + ".sdf"

DC script for post-layout SDF generation:

active_design = tap_controller

read active_design + ".db"
current_design active_design
link

set_operating_conditions BEST

include capacitance.dc  /* actual parasitic capacitances */
read_timing rc_delays.sdf  /* actual RC delays */

create_clock -period 33 -waveform {0 16.5} tck
set_clock_skew -propagated tck

set_driving_cell -cell BUFF1 -pin Z all_inputs()
set_drive 0 {tck trst}
set_load 50 all_outputs()

set_input_delay 20.0 -clock tck -max all_inputs()
set_output_delay 10.0 -clock tck -max all_outputs()

/* Assuming, only REG1 flop is violating hold-time */
set_annotated_check 0 -setup -hold \
                      -from REG1/CLK \
                      -to REG1/D

write_timing -format sdf-v2.1 \
             -output active_design + ".sdf"

10.3 Chapter Summary

The SDF file is used extensively throughout the ASIC world to perform dynamic timing simulations. The chapter briefly summarizes the contents of the SDF file that are relevant to the ensuing discussions. The chapter also discusses procedures for generating the SDF file from DC and PT, both for pre-layout and post-layout simulations. Along with command descriptions, various helpful techniques are described to "massage" the SDF, in order for the simulation to succeed.
These include fixing the clock latency and clock transition at the pre-layout level, and avoiding unknown propagation from selective logic of the design for successful simulation. The final section gathered all the information and put it together in the form of DC scripts for pre- and post-layout SDF generation.

11 PRIMETIME BASICS

PrimeTime (PT) is a sign-off quality static timing analysis tool from Synopsys. Static timing analysis, or STA, is without a doubt the most important step in the design flow. It determines whether the design works at the required speed. PT analyzes the timing delays in the design and flags violations that must be corrected. PT, similar to DC, provides a GUI interface along with the command-line interface. The GUI interface contains various windows that help analyze the design graphically. Although the GUI interface is a good starting point, most users quickly migrate to using the command-line interface. Therefore, the intent of this chapter is to focus solely on the command-line interface of PT. This chapter introduces the reader to the basics of PT, including a brief section devoted to the Tcl language that is used by PT. Also described in this chapter are selected PT commands that are used to perform successful STA, and to facilitate the designer in debugging the design for possible timing violations.

11.1 Introduction

PT is a stand-alone tool that is not integrated under the DC suite of tools. It is a separate tool, which works alongside DC. Both PT and DC have consistent commands, generate similar reports, and support common file formats. In addition, PT can also generate timing assertions that DC can use for synthesis and optimization. PT's command-line interface is based on the industry-standard language called Tcl. In contrast to DC's internal STA engine, PT is faster, takes up less memory, and has additional features.
11.1.1 Invoking PT

PT may be invoked in the command-line mode using the command pt_shell, or in the GUI mode through the command primetime.

Command-line mode:
    > pt_shell

GUI mode:
    > primetime

11.1.2 PrimeTime Environment

Upon invocation, PT looks for a file called ".synopsys_pt.setup" and includes it by default. It first searches for this file in the current directory and, failing that, looks for it in the user's home directory before using the default setup file present in the PT installation site. This file contains the necessary setup variables defining the design environment that are used by PT, exemplified below:

set search_path [list . /usr/golden/library/std_cells]
set link_path [list * ex25_worst.db ex25_best.db]

The variable search_path defines a list containing directories to look into while searching for libraries and designs. It saves the tedium of typing in complete file paths when referring to libraries and designs. The variable link_path defines a list of libraries containing cells to be used for linking the design. These libraries are searched for in the directories specified in the search_path. In the above example there are three elements in the list defined by the link_path variable. The "*" indicates designs loaded in memory, while the other two are names pertaining to the best and the worst case standard cell technology libraries.

Another commonly used method of setting up the environment, if we do not want to use the ".synopsys_pt.setup" file, is to use the source command. The source command works just like DC's include command. It includes and runs the file as if it were a script within the current environment. This command is invoked within pt_shell. For example:

pt_shell> source ex25.env

11.1.3 Automatic Command Conversion

Most of the DC commands are similar to PT commands, the exception being that PT, being Tcl based, uses the Tcl language format.
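As a concrete illustration of the format difference (the constraint values are illustrative), the same constraint written for the two shells might look like this:

```tcl
# dc_shell (original dcsh format):
#   set_input_delay 20.0 -clock tck -max all_inputs()
#
# pt_shell (Tcl format) equivalent: collection functions are invoked
# with square brackets instead of the dcsh function-call syntax.
set_input_delay 20.0 -clock tck -max [all_inputs]
```

The option names are largely unchanged; it is mainly the function-call and list syntax that differs between the two shells.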
This promotes the need for DC commands to be converted to the Tcl format before PT can utilize them. PT offers a conversion script that may be used to convert almost all dc_shell commands to the pt_shell, Tcl based format. The script is called transcript and is provided by Synopsys as a separate stand-alone utility. This script is executed from a UNIX shell as follows:

> transcript

11.2 Tcl Basics

Tcl provides most of the basic programming constructs: variables, operators, expressions, control flow, loops, procedures etc. In addition, Tcl also supports most of the UNIX commands. Tcl programs are a list of commands. Commands may be nested within other commands through a process called command substitution. Variables are defined and values assigned to them using the set command. For example:

set clock_name clk
set clock_period 20

Values can be numbers or strings; Tcl does not distinguish between variables holding numeric values versus variables holding strings. It automatically uses the numeric value in an arithmetic context. In the above examples a variable clock_name has been set to the string clk, and a variable clock_period has been set to the numeric value 20. Variables are used by preceding the variable name with a $. If the $ is not added before the variable name, Tcl treats it as a string.

create_clock $clock_name -period 20 -waveform [list 0 10]

Arithmetic operations are performed through the expr command. This useful technique provides a means for global parameterization across the entire script.

expr $clock_period / 5

The above command returns a value of 4. Tcl provides all the standard arithmetic operators like *, /, +, -, etc.

11.2.1 Command Substitution

Commands return a value that can be used in other commands. This nesting of commands is accomplished with the square brackets [ and ]. When a command is enclosed in square brackets, it is evaluated first and its value substituted in place of the command. For example:

set clock_period 20
set inp_del [expr $clock_period / 5]

The above command first evaluates the expression in the square brackets and then replaces the command with the evaluated value; in this case inp_del inherits the value of 4. Commands may also be nested to any depth. For example:

set clock_period 20
set inp_del [expr [expr $clock_period / 5] + 1]

The above example has 2 levels of nesting.
The inner command returns a value of 4, obtained by dividing the value of clock_period by 5. The outer expr command adds 1 to the result of the inner command. Thus inp_del gets set to a value of 5.

11.2.2 Lists

Lists represent a collection of objects; these objects can be either strings or lists. In the most basic form, enclosing a set of items in braces creates a list.

set clk_list {clk1 clk2 clk3}

In the above example, the set command creates a list called clk_list with 3 elements: clk1, clk2 and clk3. Another method of creating lists is through the list command, which is typically used in command substitution. For example, the following list command creates a list just like the previous set command:

list clk1 clk2 clk3

The list command is suitable for use in command substitution because it returns a value that is a list. The set command example may also be written as:

set clk_list [list clk1 clk2 clk3]

Tcl provides a group of commands to manipulate lists: concat, join, lappend, lindex, linsert, list, llength, lrange, lreplace, lsearch, lsort and split. For example, concat concatenates two lists together and returns a new list:

set new_list [concat [list clk1 clk2] [list clk3 clk4]]

In the above example the variable new_list is the result of concatenation of the list clk1 & clk2 and the list clk3 & clk4, each of which is formed by the list command. Conceptually, a simple list is a string containing elements that are separated by white space. In most cases the following two are equivalent:

{clk1 clk2 clk3}
"clk1 clk2 clk3"

In some cases the second representation is preferred because it allows variable substitution while the first representation does not. For example:

set stdlibpath [list "/usr/lib/stdlib25" "/usr/lib/padlib25"]
set link_path "/project/bigdeal/lib $stdlibpath"

For more details about the syntax of other list commands refer to any standard book on the Tcl language.

11.2.3
Flow Control and Loops

Like other scripting and programming languages, Tcl provides if and switch commands for flow control. It also provides for and while loops for looping. The if command may be used along with else or elseif statements to completely specify the process flow. The arguments to the if, elseif and else statements are usually lists, enclosed in braces to prevent any substitution. For example:

if {$port == "clk"} {
    create_clock -period 10 -waveform [list 0 5] $port
} elseif {$port == "clkdiv2"} {
    create_generated_clock -divide_by 2 -source clk $port
} else {
    echo "$port is not a clock port"
}

11.3 PrimeTime Commands

PT uses similar (but not the same) commands as DC to perform timing analysis and related functions. This is because of the difference in formats between DC and PT. A PT provided conversion script called transcript (explained earlier in the chapter) is used to convert the dc_shell commands to the PT equivalent. Since all relevant dc_shell commands are explained in detail in Chapter 6, comprehensive explanation is not provided in this section for all related commands.

11.3.1 Design Entry

Unlike DC, which can read RTL source files through HDL Compiler, PT, being a static analysis engine, can only read mapped designs. This forms the basis of design entry to PT. Among others, input to PT can be a file in db, Verilog, VHDL or EDIF format. The following pt_shell commands, appropriate to each format, are used to read the design in PT:

read_db -netlist_only <file name>.db    # db format
read_verilog <file name>.v              # verilog format
read_vhdl <file name>.vhd               # vhdl format
read_edif <file name>.edf               # EDIF format

Since the netlist in db format can also contain constraints and/or environmental attributes (maybe saved by the designer), the -netlist_only option may be used with the read_db command to instruct PT to load only the structural netlist. This prevents PT from reading the constraints and/or other attributes associated with the design. Only the structural netlist is loaded.

11.3.2 Clock Specification

The concepts behind clock specification remain the same as the ones described for DC in Chapter 6. Subtle syntax differences exist due to the difference in formats between the two. However, because clock specification may become complex, especially if there are internally generated clocks with clock division, this section will cover the complete PT clock specification techniques and syntax.

11.3.2.1 Creating Clocks

Primary clocks are defined as follows:

create_clock -period <period value> \
             -waveform {<edge list>} <port or pin list>
pt_shell> create_clock -period 20 -waveform {0 10} \
          [list CLK]

The above example creates a single clock named CLK having a period of 20ns, with rising and falling edges at 0ns and 10ns respectively.

11.3.2.2 Clock Latency and Clock Transition

The following commands are used to specify the clock latency and the clock transition. These commands are mainly used for pre-layout STA and are explained in detail in Chapter 12.

set_clock_latency <delay value> <clock list>
pt_shell> set_clock_latency 2.5 [get_clocks CLK]
pt_shell> set_clock_transition 0.2 [get_clocks CLK]

The above commands define the clock latency for the CLK port as 2.5ns, with a fixed clock transition value of 0.2ns.

11.3.2.3 Propagating the Clock

Propagating the clock is usually done after the layout tool inserts the clock tree in the design, and the netlist is brought back to PT for STA. The clock is propagated through the entire clock tree network in the netlist in order to determine the clock latency. In other words, the delay across each cell in the clock tree and the interconnect wiring delay between the cells is taken into account. The following command instructs PT to propagate the clock through the clock network:

set_propagated_clock <clock list>

pt_shell> set_propagated_clock [get_clocks CLK]

11.3.2.4 Specifying Clock Skew

Clock skew, or clock uncertainty as Synopsys prefers to call it, is the difference in the arrival times of the clock at the clock pins of the flops. In synchronous designs, data gets launched by a flop at one clock edge and is received by another flop at another clock edge (usually the next clock edge). If the two clock edges (launch and receive) are derived from the same clock, then ideally there should be an exact delay of one clock period between the two edges. Clock skew puts a crimp in this happy situation. Because of variation in routing delays (or a gated clock situation) the receiving clock edge may arrive early or late. Early arrival could cause setup-time violations and late arrival may cause hold-time violations. Therefore, it is imperative to specify the clock skew during the pre-layout phase, in order to produce robust designs. Clock skew is specified through the following command:

set_clock_uncertainty <uncertainty value> \
                      -from <from clock> \
                      -to <to clock> \
                      -setup -hold <object list>
In the following example, 0.6ns is applied to both the setup and hold-time of the clock signal, CLK.

pt_shell> set_clock_uncertainty 0.6 [get_clocks CLK]

The -setup option may be used to apply the uncertainty value to setup-time checks, while the -hold option applies the uncertainty value to hold-time checks. It must be noted that different values for setup and hold cannot be implemented within a single command. Two separate commands must be used for this purpose. For example:

pt_shell> set_clock_uncertainty 0.5 -hold [get_clocks CLK]
pt_shell> set_clock_uncertainty 1.5 -setup [get_clocks CLK]

Also, inter-clock skew can be specified with the -from and -to options, which is useful for designs containing multiple clock domains. For example:

pt_shell> set_clock_uncertainty 0.5 -from [get_clocks CLK1] \
          -to [get_clocks CLK2]

11.3.2.5 Specifying Generated Clocks

This is an important feature that is absent from DC. Very often a design may contain internally generated clocks. PT allows the user to define the relationship between the generated clock and the source clock, through the command create_generated_clock. This is convenient because pre-layout scripts can be used for post-layout with minimal changes. During post-layout timing analysis, the clock tree is inserted and the clock latency is calculated by propagating the clock signal through the clock tree buffers. Users may opt to define the divided clock independent of the source clock (by defining the clock on an output pin of the dividing logic sub-block). However, this approach forces designers to manually add the clock tree delay (from the dividing block to the rest of the design) to the clock latency of the source clock to the dividing logic block. By setting up a divided clock through the above command, the two clocks are kept in sync both in pre-layout and post-layout phases.

create_generated_clock -name <clock name> \
                       -source <source pin> \
                       -divide_by <factor> <pin list>
pt_shell> create_generated_clock -name DIV2CLK \
          -source CLK -divide_by 2 \
          blockA/DFF1X/Q

The above example creates a generated clock on pin Q of the cell DFF1X belonging to blockA. The name of the generated clock is DIV2CLK, having half the frequency of the source clock, CLK.

11.3.2.6 Clock Gating Checks

For low power applications, designers often resort to gating the clock in the design. This technique allows designers to enable the clock only when needed. The gating logic may produce a clipped clock or glitches, if the setup and hold-time requirements are not met (for the gating logic). PT allows designers to specify the setup/hold requirements for the gating logic, as follows:

set_clock_gating_check -setup <setup value> \
                       -hold <hold value> <object list>
pt_shell> set_clock_gating_check -setup 0.5 -hold 0.01 CLK

The above example informs PT that the setup-time and hold-time requirement for all the gates in the clock network of CLK is 0.5ns and 0.01ns respectively. Gating checks on an isolated cell can be accomplished by specifying the cell name in the object list. For example:

pt_shell> set_clock_gating_check -setup 0.05 -hold 0.01 \
          [get_lib_cell stdcell_lib/BUFF4X]

By default, PT performs the gating check with a zero value used for setup and hold times, unless the library contains specific values for setup and hold times for the cell used to gate the clock. If the gating cell contains the setup/hold timing checks, then the gating check values may be automatically derived from the SDF file. The clock gating checks are only performed for combinational cells. Also, the gating checks cannot be performed between two clocks.

11.3.3 Timing Analysis Commands

This section describes a selected set of PT commands that are used to perform STA. Only the most commonly used options are listed for these commands.

set_disable_timing: Applications of this command include disabling a timing arc of a cell in order to break a combinational feedback loop, or to instruct PT to exclude a particular timing arc (and thus the path segment) from analysis.

set_disable_timing -from <from pin> \
                   -to <to pin> <cell list>
pt_shell> set_disable_timing -from A1 -to ZN {INVD2}

report_disable_timing: This command is used to display the timing arcs that were disabled by the user, or by PT. The report identifies individual disabled paths, using the following flags:

Flags:
    u : Timing path disabled by the user.
    l : Timing loop broken by PT.
    c : Timing path disabled during case analysis.

set_input_transition: This command is an alternative to the set_driving_cell command. It sets a fixed transition time that is not dependent on the net loading. This command is specified on input/inout ports of the design.

set_input_transition <transition value> <port list>
pt_shell> set_input_transition 0.2 [all_inputs]
pt_shell> set_input_transition 0.4 [list in1 in2]

set_timing_derate: This command is used to derate the delay numbers shown in the timing report. PT provides this powerful capability that is useful in adding extra timing margin to the entire design. The amount of deration is controlled by a fixed value, which is specified by the user. The original delay numbers are multiplied by this value, before the timing report is generated.

report_case_analysis: This command is used to display the case analysis values set by the user. PT displays a report that identifies the pin/port list along with the corresponding case analysis values.

pt_shell> report_case_analysis

set_timing_derate -min <derate value> \
                  -max <derate value>
iing_derate ~min 0.2 -max 1.2 = report_timing: Similar to DC, this command is used to generate the timing report of path segments in a design. This command is used extensively and provides ample flexibility that is helpful in focussing explicitly on an individual path, or on a collection of paths in a design. pt_shel1> set_t ~ set_case_analysis: command performs case analysis and is one of the ‘most useful feature provided by PT. This command is used to set a fixed logic value to a port (or pin) while performing STA, report_timing -from
to
through
set_case_analysis [01]
—Nets —capacitance -transition_time -max_paths
-nworst
Pt_shel1> set_case_analysis 0 scan_mode The from and —to options facilitate the user in defining a path for analysis. Since there may be multiple paths leading from a startpoint to a single endpoint, the -through option may be used to further isolate the Application of this command includes disabling timing paths that are not valid during a particular mode of operation. For instance, in the above example, the scan_mode port switches the design between the functional mode (normal operation) and the test mode of operation. The zero value set required path segment for timing analysis. on the scan_mode port is propagated to all the cells driven by this port. This results in disablement of certain timing arcs of all cells that are related pt_shell>report_timing —from [all_inputs] \ to the scan_mode por. Since testability logic is usually non timing-critical, ~to [all_registers —data_pins] disabling the timing arcs of the non timing-critical paths causes the real timing-critical paths to be identified and analyzed. The usage of this pt_shel1>report_timing —from int . command is further explained in Chapter 12. cto blockA/subBicarry_regt/D\ through blockA/mux1/A1 ~ femove_case_analysis: command is used to remove the case analysis ‘The ~delay_type option is used to specify the type of delay to be reported values set by the above command. at an endpoint. Accepted values are max, min, min_max, max_rise, max_fall, min_rise, and min_fall. By default PT uses the max type, which reports that the maximum delay between two points. The min type option is used to display the minimum delay between two points, The max type is used for analyzing the design for setup-time while the min type is, remove_case_analysis
pt_she1l>remove_case_analysis scan_modeused to perform hold-time analysis. The other types are not frequently used and users are advised to refer to the PT User Guide for fall explanation regarding their usage. pt_shel1> report timing from all_registers -clock_pins] \ to [all_registers ~data_pins] —\ ~delay_type min ‘The —nets, ~capacitance and -transition_time options are one of the ‘most useful and frequently used options of the report_timing command. These options help the designer to debug a particular path, in order to track the cause of a possible violation. The —nets option displays the fanout of each cell in the path report, while the —capacitance and the —transition_time options reports the lumped capacitance on the net and the transition time (slew rate) for each driver or load pin, respectively. Failure to include these options results in a timing report that does not include the information mentioned above. pt_shel1> report_timing —from int \ “to blockA/subB/carry_regi/D \ “nets -capacitance —transition_time The ~nworst option specifies the number of paths to be reported for each endpoint, while the -max_paths option defines the number of paths to be reported per path group for different endpoints. The default value of both these options is 1. pt_shell> report_timing —from (all_inputs} \ ~to fall_registers —data_pins] _\ —nworst 1000 -max_paths 500 report_constraint: Similar to DC, this command in PT checks for the DRC’s as defined by the designer or the technology library. Additionally, this command is also useful for determining the “overall health” of the design with regards to the setup and hold-time violations. 
The syntax of this command along with the most commonly used options is:

    report_constraint -all_violators
                      -max_transition -min_transition
                      -max_capacitance -min_capacitance
                      -max_fanout -min_fanout
                      -max_delay -min_delay
                      -clock_gating_setup -clock_gating_hold

The -all_violators option displays all constraint violators. Generally, this option is used to determine at a glance the overall condition of the design. The report summarizes all the violators, starting from the greatest to the least violator for a particular constraint.

pt_shell> report_constraint -all_violators

Selective reports may be obtained by using the -max_transition, -min_transition, -max_capacitance, -min_capacitance, -max_fanout, -min_fanout, -max_delay, and -min_delay options. The -max_delay and -min_delay options report a summary of all setup and hold-time violations, while the others report the DRC violations.

The -clock_gating_setup and -clock_gating_hold options are used to display the setup/hold-time reports for the cell used for gating the clock. In addition, there are other options available for this command that may be useful to the designer. Full details of these options may be found in the PT User Guide.

pt_shell> report_constraint -max_transition
pt_shell> report_constraint -min_capacitance
pt_shell> report_constraint -max_fanout
pt_shell> report_constraint -max_delay -min_delay
pt_shell> report_constraint -clock_gating_setup -clock_gating_hold

Initially use the report_constraint command to ascertain the amount and the number of violations. The report produced provides a general estimate of the overall health of the design. Depending upon the severity of violations, a possible re-synthesis of the design may need to be performed. To further isolate the cause of a violation, the report_timing command should be used to target the violating path, in order to display a full timing report.
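The triage flow suggested above can be captured as a short pt_shell sequence. This is only a sketch; the endpoint name below is reused from the earlier examples and would be replaced by the actual violating endpoint identified by report_constraint:

```tcl
# Step 1: gauge the overall health of the design.
report_constraint -all_violators

# Step 2: target one violating endpoint (hypothetical name, taken from
# the earlier examples) with full debug detail to locate the cause.
report_timing -to blockA/subB/carry_reg1/D \
    -delay_type max \
    -nets -capacitance -transition_time
```

The second report exposes the fanout, net capacitance and slew of every stage in the failing path, which usually points directly at the offending driver or net.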
- report_bottleneck: This command is used to identify the leaf cells in the design that contribute to multiple violations. For instance, several violating path segments of a design may share a common leaf cell. Altering the size of this leaf cell (sizing up or down) may improve the timing (thus remove the violation) of all the violating path segments. The syntax of this command along with the most commonly used options is:

    report_bottleneck -from <from_list> -to <to_list> -through <through_list>
                      -max_cells <count>
                      -max_paths <count> -nworst_paths <count>

The -from and -to options facilitate the user in defining a path for bottleneck analysis. Since there may be multiple paths leading from a startpoint to a single endpoint, the -through option may be used to further isolate the required path segment for bottleneck analysis.

pt_shell> report_bottleneck -from in1 \
          -to blockA/subB/carry_reg1/D \
          -through blockA/mux1/A1

As the name suggests, the -max_cells option specifies the number of leaf cells to be reported. The default value is 20.

The -nworst_paths option specifies the number of paths to be reported for each endpoint, while the -max_paths option defines the number of paths to be reported per path group for different endpoints. The default value of both these options is 100.

pt_shell> report_bottleneck -from in1 \
          -to blockA/subB/carry_reg1/D \
          -through blockA/mux1/A1 \
          -max_cells 50 \
          -nworst_paths 500 -max_paths 200

11.3.4 Other Miscellaneous Commands

- write_sdf: This command generates the SDF file that contains delays and timing checks for each instance in the design. PT uses the wire-load models to estimate the delays of cells during the pre-layout phase. For post-layout, PT uses the actual annotated delays (from the physical layout) while generating the SDF file. The syntax of this command along with the most commonly used options is:

    write_sdf -version <1.0 | 2.1>
              -no_net_delays
              -no_timing_checks
              <file_name>
Unless explicitly specified, by default PT generates the SDF file in SDF version 2.1 format.

The -no_net_delays option specifies that the interconnect delays (INTERCONNECT field in the SDF file) are not to be written out separately in the SDF file. In this case, they are included as part of the IOPATH delay of each cell. This option is mainly used during the pre-layout phase, because the interconnect delays are based upon the wire-load models. However, the interconnect delays after layout are real and are based on the routed design. Therefore, in general this option should be avoided while generating the post-layout SDF file.

pt_shell> write_sdf -no_net_delays top_prelayout.sdf
pt_shell> write_sdf top_postlayout.sdf

Specification of the -no_timing_checks option forces PT to omit the timing-checks section (TIMINGCHECK field) from the SDF file. As described in Chapter 10, the timing-checks section contains the setup/hold/width timing checks. This option is useful for generating an SDF file that may be used to validate only the functionality of the design through dynamic simulation, without bothering to check for setup/hold/width timing violations. Once the design passes functional validation, the full SDF (no -no_timing_checks option) may be generated.

pt_shell> write_sdf -no_timing_checks top_prelayout.sdf

- write_sdf_constraints: This command is similar to the write_constraints command in DC and performs the same function. It is used to generate the path timing constraints in SDF format, which is used by the layout tool to perform timing driven layout. The syntax of this command along with the most commonly used options is:

    write_sdf_constraints -version <1.0 | 2.1>
                          -from <from_list> -to <to_list> -through <through_list>
                          -cover_design
                          -slack_lesser_than <value>
                          -max_paths <count> -nworst <count>
                          <file_name>

Unless explicitly specified, by default PT generates the constraint file in SDF version 2.1 format. The -from, -to and -through options facilitate the user in specifying a particular path to be written to the constraint file.

The -nworst option specifies the number of paths to be written to the constraint file for each endpoint, while the -max_paths option defines the number of paths to be considered for each constraint group. The default value of both these options is 1. The default settings of these options usually suffice for most designs.

pt_shell> write_sdf_constraints -from in1 \
          -to blockA/subB/carry_reg1/D \
          -through blockA/mux1/A1 \
          tdl.sdf

The -cover_design option is used to generate just enough unique path timing constraints to cover the worst path for each path segment in the design. When specified, all other options such as -nworst, -to, -from and -through are ignored. Although this option is recommended by Synopsys, it should be used judiciously as it may produce long run-times, especially for large designs.

pt_shell> write_sdf_constraints -cover_design tdl.sdf

An alternative is to use the -slack_lesser_than option, which specifies that any path that has a slack value greater than the one specified is to be ignored. This means that a negative slack value for a path segment is considered to be most critical and has the highest priority. Thus all critical paths may be universally selected by specifying a low value for this option, and hence will be written out to the constraint file. All high slack values (less critical paths) will be ignored.

pt_shell> write_sdf_constraints -slack_lesser_than 1.5 tdl.sdf

- swap_cell: This command may be used to replace an existing cell in the design with another, having the same pinout.

    swap_cell <cell_list> <lib_cell_name>
For example, suppose a path is failing due to a hold-time violation and, in order to fix the timing violation, you want to see the effect on the reported slack of sizing down a particular leaf cell in the path, without changing the netlist. In this case the swap_cell command may be used at the command line to replace the existing cell with another containing the same pinout.

pt_shell> swap_cell {U1} [get_lib_cell stdcell_lib/AND2X2]

In the above example, the instance U1 (say, a 2-input AND gate with 8X drive strength) in a design is replaced by the AND2X2 gate (2X drive strength) from the "stdcell_lib" technology library.

11.4 Chapter Summary

Static timing analysis is one of the most critical steps in the entire ASIC chip synthesis flow. This chapter provides an introduction to PrimeTime that includes PrimeTime invocation and its environment settings. PrimeTime is a stand-alone static timing analysis tool, which is based on the universally adopted EDA tool language, Tcl. A brief section is included on the Tcl language in the context of PrimeTime, to facilitate the designer in writing PrimeTime scripts and building upon them to produce complex scripts. The last section covers all relevant PrimeTime commands that may be used to perform static timing analysis, design debugging and writing delay information in SDF format. In addition, this section also covers topics on design entry and clock specification, both for pre-layout and post-layout.

12 STATIC TIMING ANALYSIS Using PrimeTime

The key to working silicon usually lies in successful completion of static timing analysis performed on a particular design. PT is a stand-alone tool by Synopsys that is used to perform static timing analysis. It not only checks the design for required constraints that are governed by the design specifications, but also performs comprehensive analysis of the design.
This capability makes STA one of the most important steps in the entire design flow, and it is used by many designers as a sign-off criterion to the ASIC vendor.

This chapter illustrates the part of the design flow where PT is utilized. It covers both the pre-layout and the post-layout phases of the ASIC design flow process. STA is closely integrated with the overall synthesis flow, therefore parts of this chapter may contain some repetition from elsewhere in this book.

12.1 Why Static Timing Analysis?

Traditional methods of analyzing gate-level designs using dynamic simulation are posing a bottleneck for large complex designs. Today, the trend is towards incorporating system-on-a-chip (SoC) designs, which may result in millions of gates per ASIC. Verifying such a design through dynamic simulation poses a nightmare to designers and may prove to be impossible due to long run-times (usually days and sometimes weeks). Furthermore, dynamic simulation relies on the quality and coverage of the test-bench used for verification. Only parts of the logic that are sensitized are tested, while the remaining parts of the design remain untested. To alleviate this problem, designers now resort to other means of verification, such as STA to verify the timing, and the formal verification technique to verify the functionality of the gate-level netlist against the source RTL. However, comprehensive sets of test-benches are still needed to verify the functionality of the source RTL. Thus, dynamic simulation is needed solely to verify the functionality of the design at the RTL level. This results in considerable reduction in run-time. The STA approach is infinitely fast compared to dynamic simulation and verifies all parts of the gate-level design for timing. Due to the similar nature of the synthesis and the STA engines, static timing analysis is well suited for verifying synthesized designs.

12.1.1 What to Analyze?
In general, four types of analysis are performed on the design, as follows:

a) From primary inputs to all flops in the design.
b) From flop to flop.
c) From flop to primary output of the design.
d) From primary inputs to primary outputs of the design.

All four types of analysis can be accomplished by using the following commands:

pt_shell> report_timing -from [all_inputs] \
          -to [all_registers -data_pins]

pt_shell> report_timing -from [all_registers -clock_pins] \
          -to [all_registers -data_pins]

pt_shell> report_timing -from [all_registers -clock_pins] \
          -to [all_outputs]

pt_shell> report_timing -from [all_inputs] \
          -to [all_outputs]

Although using the above commands is a cleaner method of generating reports for piping them to individual files for analysis, PT takes a longer time to perform each operation. PT takes less time to generate the same results if the following commands are used:

pt_shell> report_timing -to [all_registers -data_pins]

pt_shell> report_timing -to [all_outputs]

12.2 Timing Exceptions

In most designs there may be paths that exhibit timing exceptions. For instance, some parts of the logic may have been designed to function as multicycle paths, while others may simply be false paths. Therefore, before analyzing the design, PT must be made aware of the special behavior exhibited by these paths. PT may report timing violations for multicycle paths if they are not specified as such. Also, path segments in the design that are not factual must be identified and specified as false paths, in order to prevent PT from producing the timing reports for these paths.

12.2.1 Multicycle Paths

PT by default treats all paths in the design as single-cycle and performs the STA accordingly, i.e., data is launched from the driving flop using the first edge of the clock, and is captured by the receiving flop using the second edge of the clock. This means that the data must be received by the receiving flop within one clock cycle (single clock period).
In the multicycle mode, the data may take more than one clock cycle to reach its destination. The amount of time taken by the data to reach its destination is governed by the multiplier value used in the following command:

    set_multicycle_path <multiplier> -from <from_list> -to <to_list>
[Figure 12-1. Defining Relationship for a Single Clock — a path from regA to regB on one clock, with waveforms of clk at regA and clk at regB showing the single-cycle and multicycle timing relationships.]

Figure 12-1 illustrates the comparison between the single-cycle setup/hold-time relationship and the multicycle setup/hold-time relationship. In the multicycle definition, a multiplier value of 2 is used to inform PT that the data latching occurs at regB after an additional clock pulse. The following command was used:

pt_shell> set_multicycle_path 2 -from regA -to regB

In the case of generated clocks, PT does not automatically determine the relationship between the primary clock and the derived clock, even when the create_generated_clock command is used. The single-cycle determination is independent of whether one clock is generated or not. It is based on the smallest interval between the open edge of the first clock to the closing edge of the second clock (in this case the generated clock).

For separate clocks with different frequencies, the set_multicycle_path command may be used to define the relationship between these clocks. By default, PT uses the most restrictive setup-time and hold-time relationship between these clocks. These may be overridden by using the set_multicycle_path command that defines the exact relationship between these clocks.

Figure 12-2 illustrates an example, where a relationship exists between two separate clocks. During the single-cycle timing (default behavior), the setup and hold-time relationship occurs as shown.
However, to specify a multicycle path between regA and regB, the following command is used:

pt_shell> set_multicycle_path 2 -setup \
          -from regA/CP -to regB/D

[Figure 12-2. Defining Relationship Between Separate Clocks — regA clocked by clk1 and regB clocked by clk2, with waveforms of clk1 at regA and clk2 at regB showing the single-cycle and multicycle timing relationships.]

The above example uses the multiplier value of 2 to define the setup-time relationship between the two clocks. The -setup option is used to define the setup-time relationship. However, this option also affects the hold-time relationship. PT uses a set of rules (explained in detail in the PT User Guide) to determine the most restrictive relationship for the hold-time between the two clocks. Therefore, PT may assume an incorrect hold-time relationship between the two clocks (shown as a dotted line in Figure 12-2). To avoid this situation, the hold-time relationship between the two clocks should also be
specified explicitly:

    set_min_delay 0 -from regA/CP -to regB/D

The zero value moves the hold-time relationship from the default value (dotted line in Figure 12-2) to the desired edge (bold line in Figure 12-2).

12.2.2 False Paths

Some designs may contain false timing paths. A false path is identified as a timing path that does not propagate a signal. False paths are created through the following pt_shell command:

    set_false_path -from <from_list> -to <to_list> -through <through_list>
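As a simple illustration (the port and pin names here are hypothetical), a static configuration input that never toggles during functional operation could be excluded from analysis as follows:

```tcl
# Hypothetical example: cfg_sel is a static configuration port, so the
# path from it to the mode register is never exercised at speed and can
# be declared false.
set_false_path -from [get_ports cfg_sel] \
    -to [get_pins mode_reg/D]
```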
It must be noted that the above command does not disable the timing arc of any cell; it merely removes the constraints of the identified path. Therefore, if the timing analysis is performed on the false path, an unconstrained timing report is generated.

By default, PT performs STA on all the paths. This results in the generation of timing reports for all the path segments (including the false paths in the design). If the false path segment is failing timing by a large amount, then the report may mask the violations of the real timing paths. This of course depends upon the options used for the report_timing command.

Let's presume that there are multiple false paths in the design and they are all failing by a large amount during hold-time STA. However, the real timing paths are failing by a small margin. The false paths have not been identified, because the user thinks that a large value of the -nworst and -max_paths options will cover all the paths in the design (including the real timing paths), and that identification of false paths is therefore unnecessary. The user uses the following command to analyze the design:

pt_shell> report_timing -from [all_inputs] \
          -to [all_registers -data_pins] \
          -nworst 10000 -max_paths 1000 \
          -delay_type min

The above method is certainly a viable approach and may not overly impact the run-time. However, a large value for the -nworst and -max_paths options (used in the above example) causes PT to generate and display multiple timing reports, covering all the paths in the design, most of which are false paths. Only a selected few timing reports relate to the real timing violations. By using this approach, it becomes tedious to distinguish between the real timing paths and the false timing paths. In addition, due to the large number of timing reports generated, it is easy to mistakenly overlook a real timing path that is violating the timing constraints. To avoid this situation, false path identification is recommended before performing STA.
In addition, designers may use the -through option to further isolate the false path. It must be noted that the -through option significantly impacts the run-time, therefore it should be used judiciously and its usage minimized. A better alternative is to disable the timing arc of the cell in the through list, using the set_disable_timing command explained later in this chapter.

12.2.2.1 Helpful Hints for Setting False Paths

Timing exceptions impact the run-time. Setting multiple false paths in a design causes PT to slow down even further. Designers inadvertently specify the false paths with no regard to the proper usage, thereby impacting the run-time. The following suggestions are provided to help the designer in properly defining the false paths:

a) Avoid using wildcard characters when defining a false path. Failing to do so may result in PT generating a large number of false paths. For example:

pt_shell> set_false_path -from ififo_reg*/CP \
          -to ofifo_reg*/D

In the above case, if the ififo_reg and ofifo_reg are each part of a 16-bit register bank, PT will generate a large number of unnecessary false paths. Disabling the timing arc of a common cell that is shared by the above paths is a better approach. The timing arc is disabled using the set_disable_timing command, explained in the next section.

b) Avoid using the -through option for multiple false paths. Try finding a common cell that is shared by a group of identified false paths. Disable the timing arc of this cell through the set_disable_timing command.

c) Do not define false paths for registers belonging to separate asynchronous clock domains. For instance, if there are two asynchronous clocks (say, CLK1 and CLK2) then the following command should be avoided:

pt_shell> set_false_path -from [all_registers -clock CLK1] \
          -to [all_registers -clock CLK2]

The above command forces PT to enumerate every register in the design, thereby causing a big impact on the run-time.
A superior alternative is to set the false paths on the clocks themselves, rather than on the registers. Doing this prevents PT from enumerating all the registers in the design, therefore little or no impact on the run-time is observed. This is a preferred and efficient method of defining the asynchronous behavior of two clocks in PT. For example:

pt_shell> set_false_path -from [get_clocks CLK1] \
          -to [get_clocks CLK2]

pt_shell> set_false_path -from [get_clocks CLK2] \
          -to [get_clocks CLK1]

12.3 Disabling Timing Arcs

PT automatically disables timing paths that cause timing loops, in order to complete the STA on a design. However, designers sometimes find it necessary to disable other timing paths for various reasons, the most prevalent
During normal mode (functional mode), the signal bist_sig is blocked, while the signal func_sig is allowed to propagate. However, during test-mode (for e.g., for testing BIST logic), the bist_sig signal is selected to pass through, while blocking the func_sig signal. The application of this mux is described in detail in Chapter 8 (Figure 8-2), where it is used to bypass the input signals to the RAM, so that the logic previously shadowed by the RAM, thus un- scannable, can be made scannable. ‘Three timing ares exist for this cell — from Al to Z, A2 to Z, and from S to Z. Only the first two ares are shown in the above figure for the sake of clarity While performing STA to check the timing in functional mode, unless the user isolates the path using the —through option of the report timing command, PT may choose the wrong path (going through A2 to Z), thereby generating a false path delay timing report. Therefore it is prudent that the timing arc be disabled from A2 to Z of the cell MUXDI (instance name Ul) during functional mode STA. This is performed using the following pl_shell command: pt_shell> set_disable_timing from A2 -toZ {U1} 12.3.2 Case Analysis ‘An altemate solution to the above scenario is to perform case analysis on the design. By setting a logic value to the bist_mode signal, all timing arc: related to the bist_mode signal are disabled/enabled. In the above case, using. the following command disables the timing are from A2 to Z: pt_shell1> set_case_analysis 1 bist_mode The logic 1 value for the bist_mode signal forces PT to disable the timing arc from A2 to Z and enables the signal func_sig to propagate. By changing this value to 0, the arc from Al to Z is disabled and the bist_sig signal is allowed to propagate. Although, both the set_disable_timing and set_case_analysis commands perform the same function of disabling the timing arcs, the case analysis, approach is superior, for designs containing many such situations. 
For instance, a single command is used to analyze the entire design in either the normal mode or the test mode. However, the set_disable_timing command is useful for disabling the timing arc of an individual cell when performing STA.

12.4 Environment and Constraints

Apart from slight syntax differences, the environment and constraints settings for PT are the same as those used for DC. The following commands exemplify these settings:

pt_shell> set_wire_load_model -name <wire-load model name>
pt_shell> set_wire_load_mode <top | enclosed | segmented>
pt_shell> set_operating_conditions <operating conditions>
pt_shell> set_load 50 [all_outputs]
pt_shell> set_input_delay 10.0 -clock <clock name> [all_inputs]
pt_shell> set_output_delay 10.0 -clock <clock name> [all_outputs]

Although PT provides a multitude of options for the above commands, most designers only use a limited set of options, as shown above. Users are advised to refer to the PT User Guide for full details regarding the additional options available for each of the above commands. Since the behavior and function of these commands are the same as the commands used for DC, no explanation is given here. The DC commands that are related to each of the above commands are explained in detail in Chapter 6.

12.4.1 Operating Conditions - A Dilemma

In general, the design is analyzed for setup-time violations utilizing the worst-case operating conditions, while the best-case operating conditions are used to analyze the design for hold-time violations.

The reason for using the worst-case operating conditions to perform setup-time analysis is that the delay values of each cell in the library depict the delays (usually large) of a device operating under the worst-case conditions (maximum temperature, low voltage and other worst-case process parameters). The large delay values cause the data-flow to slow down, which may result in a setup-time failure for a particular flop.

An opposite effect occurs for the data-flow when the design uses the best-case operating conditions for hold-time STA. In this case, the delay values (small) of each cell in the technology library depict the best-case operating conditions (minimum temperature, high voltage and other best-case process parameters). Therefore, the data-flow now encounters less delay to reach its destination, i.e., the data arrives faster than before, which may cause hold-time violations at the input of the register.

By analyzing the design at both corners of the operating conditions, a time-window is created that states: if the device operates within the range defined by both operating conditions, the device will operate successfully.
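Assuming the technology library defines named operating conditions (the names WCCOM and BCCOM below are hypothetical and library-dependent), the two corner analyses described above might be run back-to-back as follows:

```tcl
# Worst-case corner: large cell delays, used for setup-time analysis.
set_operating_conditions WCCOM
report_timing -delay_type max

# Best-case corner: small cell delays, used for hold-time analysis.
set_operating_conditions BCCOM
report_timing -delay_type min
```

A design that is clean in both runs falls inside the time-window described above and can be expected to operate across the specified range of conditions.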
12.5 Pre-Layout

After successful synthesis, the netlist obtained must be statically analyzed to check for timing violations. The timing violations may consist of setup and/or hold-time violations.

The design was synthesized with emphasis on maximizing the setup-time, therefore you may encounter very few setup-time violations, if any. However, hold-time violations will generally occur at this stage. This is due to the data arriving too fast at the input of sequential cells with respect to the clock.

If the design is failing setup-time requirements, then you have no other option but to re-synthesize the design, targeting the violating paths for further optimization. This may involve grouping the violating paths or over-constraining the entire sub-block that had violations. However, if the design is failing hold-time requirements, you may either fix these violations at the pre-layout level, or postpone this step until after layout. Many designers prefer the latter approach for minor hold-time violations (also used here), since the pre-layout synthesis and timing analysis use the statistical wire-load models, and fixing the hold-time violations at the pre-layout level may result in setup-time violations for the same path after layout. However, if the wire-load models truly reflect the post-routed delays, then it is prudent to fix the hold-time violations at this stage. In any case, it must be noted that gross hold-time violations should be fixed at the pre-layout level, in order to minimize the number of hold-time fixes that may result after the layout.

12.5.1 Pre-Layout Clock Specification

In the pre-layout phase, the clock tree information is absent from the netlist. Therefore, it is necessary to estimate the post-route clock-tree delays up-front, during the pre-layout phase, in order to perform adequate STA.
In addition, the estimated clock transition should also be defined in order to prevent PT from calculating false delays (usually large) for the driven gates. The cause of large delays is usually attributed to the high fanout normally associated with clock networks. The large fanout leads to slow input transition times computed for the clock driving the endpoint gates, which in turn results in PT computing unusually large delay values for the endpoint gates. To prevent this situation, it is recommended that a fixed clock transition value be specified at the source.

The following commands may be used to define the clock, during the pre-layout phase of the design.

pt_shell> create_clock -period 20 -waveform [list 0 10] [list CLK]
pt_shell> set_clock_latency 2.5 [get_clocks CLK]
pt_shell> set_clock_transition 0.2 [get_clocks CLK]
pt_shell> set_clock_uncertainty 1.2 -setup [get_clocks CLK]
pt_shell> set_clock_uncertainty 0.5 -hold [get_clocks CLK]

The above commands specify the port CLK as type clock having a period of 20ns, the clock latency as 2.5ns, and a fixed clock transition value of 0.2ns. The clock latency value of 2.5ns signifies that the clock delay from the input port CLK to all the endpoints is fixed at 2.5ns. In addition, the 0.2ns value of the clock transition forces PT to use the 0.2ns value, instead of calculating its own. The clock skew is approximated with 1.2ns specified for the setup-time, and 0.5ns for the hold-time. Using this approach during pre-layout yields a realistic approximation of the post-layout clock network results.

12.5.2 Timing Analysis

The following script gathers all the information provided above and may be used to perform the setup-time STA on a design.

PT script for pre-layout setup-time STA

# Define the design and read the netlist only
set active_design <design name>
read_db -netlist_only $active_design.db
# or use the following command to read the Verilog netlist.
# read_verilog $active_design.v

current_design $active_design

set_wire_load_model <wire-load model name>
set_wire_load_mode < top | enclosed | segmented >

set_operating_conditions <worst-case operating conditions>
# Assuming the 50pf load requirement for all outputs
set_load 50.0 [all_outputs]

# Assuming the clock name is CLK with a period of 30ns.
# The latency and transition are frozen to approximate the
# post-routed values.
create_clock -period 30 -waveform [0 15] CLK
set_clock_latency 3.0 [get_clocks CLK]
set_clock_transition 0.2 [get_clocks CLK]
set_clock_uncertainty 1.5 -setup [get_clocks CLK]

# The input and output delay constraint values are assumed
# to be derived from the design specifications.
set_input_delay 15.0 -clock CLK [all_inputs]
set_output_delay 10.0 -clock CLK [all_outputs]

# Assuming a Tcl variable TESTMODE has been defined.
# This variable is used to switch between the normal-mode and
# the test-mode for static timing analysis. Case analysis for
# normal-mode is enabled when TESTMODE = 1, while
# case analysis for test-mode is enabled when TESTMODE = 0.
# The bist_mode signal is used from the example illustrated in
# Figure 12-3.
set TESTMODE [getenv TESTMODE]
if {$TESTMODE == 1} {
    set_case_analysis 1 [get_port bist_mode]
} else {
    set_case_analysis 0 [get_port bist_mode]
}

# The following command determines the overall health
# of the design.
report_constraint -all_violators

# Extensive analysis is performed using the following commands.
report_timing -to [all_registers -data_pins]
report_timing -to [all_outputs]

Also, specification of the startpoint and the endpoint through the -from and the -to options of the report_timing command may be used to target selective paths. In addition, further isolation of the selected path may be achieved by using the -through option.

By default, PT performs the maximum delay analysis, therefore specification of the max value for the -delay_type option of the report_timing command is not needed. However, in order to display all timing paths of the design, the -nworst and/or -max_paths options may be utilized.
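The path-isolation options mentioned above can be combined as sketched below. The instance pin names (blockA/src_reg/CP, blockA/U77/Z, blockA/dst_reg/D) are hypothetical and only stand in for real objects in a design.

```tcl
# Isolate a single path segment with -from/-through/-to.
report_timing -from [get_pins blockA/src_reg/CP] \
              -through [get_pins blockA/U77/Z] \
              -to [get_pins blockA/dst_reg/D]

# Or widen the search: report up to the four worst paths per
# endpoint, and up to 20 paths in total.
report_timing -to [all_registers -data_pins] -nworst 4 -max_paths 20
```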
As mentioned in the previous chapter, the report_constraint command is used to determine the overall health of the design. This command should be initially used to check for DRC violations (max_transition, max_capacitance, max_fanout etc.). In addition, this command may also be used to generate a broad spectrum of setup/hold-time timing reports for the entire design. Note that the timing report produced by the report_constraint command does not include a full path timing report. It only produces a summary report for all violating paths per endpoint (assuming that the -all_violators option is used).

The report_timing command is used to analyze the design in more detail. This command produces a timing report that includes the full path from the startpoint to the endpoint. This command is useful for analyzing the failing path segments of the design. For instance, it is possible to narrow down the cause of the failure by utilizing the -capacitance and -nets options of this command.

12.6 Post-Layout

The post-layout steps involve analyzing the design for timing with actual delays back annotated. These delays are obtained by extracting the layout database. The analysis is performed on the post-routed netlist that contains the clock tree information. Various methods exist for porting the clock tree to DC and PT, and have been explained in detail in Chapter 9. Let us assume that the modified netlist exists in db format.

At this stage, a comprehensive STA should be performed on the design. This involves analyzing the design for both the setup and hold-time requirements. In general, the design will pass timing with ample setup-time, but may fail hold-time requirements. In order to fix the hold-time violations, several methods may be employed. These are explained in Chapter 9. After incorporating the hold-time fixes, the design must be analyzed again to verify the timing of the fixes.

12.6.1 What to Back Annotate?
One of the most frequent questions asked by designers is: what should I back annotate to PT, and in what format? Chapter 9 discusses various types of layout database extraction and associated formats. Pros and cons of each format are discussed at length. It is recommended that the following types of information be generated from the layout tool for back annotation to PT in order to perform STA:

a) Net RC delays in SDF format.
b) Capacitive net loading values in set_load format.
c) Parasitic information for clock and other critical nets in DSPF, RSPF or SPEF file formats.

The following PT commands are used to back annotate the above information:

read_sdf: As the name suggests, this command is used to read the SDF file. For example:

pt_shell> read_sdf rc_delays.sdf

source: PT uses this command to read external files in Tcl format. Therefore, this command may be used to back annotate the net capacitances file in set_load file format. For example:

pt_shell> source capacitance.pt

read_parasitics: This command is utilized by PT to back annotate the parasitics in DSPF, RSPF and SPEF formats. You do not need to specify the format of the file; PT automatically detects it. For example:

pt_shell> read_parasitics clock_info.spf

12.6.2 Post-Layout Clock Specification

Similar to pre-layout, the post-layout timing analysis uses the same commands, except that this time the clock is propagated through the entire clock network. This is because the clock network now comprises the clock tree buffers, and thus the clock latency and skew are dependent on these buffers. Therefore, fixing the clock latency and transition to a specified value is not required for the post-route clock specification. The following commands exemplify the post-route clock specification.
pt_shell> create_clock -period 20 -waveform [list 0 10] [list CLK]
pt_shell> set_propagated_clock [get_clocks CLK]

As the name suggests, the set_propagated_clock command propagates the clock throughout the clock network. Since the clock tree information is now present in the design, the delay, skew, and transition time of the clock are calculated by PT, from the gates comprising the clock network.

12.6.3 Timing Analysis

Predominantly, the timing of the design is dependent upon clock latency and skew, i.e., the clock is the reference for all other signals in the design. It is therefore prudent to perform the clock skew analysis before attempting to analyze the whole design. A useful Tcl script is provided by Synopsys through their on-line support on the web, called SolvNet. You may download this script and run the analysis before proceeding. If the Tcl script is not available, then designers may write their own script to generate a report for the clock delay, starting from the source point of the clock and ending at all the endpoints. The clock skew and total delay may be determined by parsing the generated report.

Although setting the clock uncertainty for post-layout STA is not needed, some designers prefer to specify a small amount of clock uncertainty, in order to produce a robust design.

Let us assume that the clock latency and skew are within limits. The next step is to perform the static timing analysis on the design, in order to check for setup and hold-time violations. The setup-time analysis is similar to that performed for pre-layout, the only difference being the clock specification (propagate the clock) as described before. In addition, during post-route STA, the extracted information from the layout database is back annotated to the design.

The following script illustrates the process of performing post-route setup-time STA on a design. The items in bold reflect the differences between the pre and post-layout timing analysis.
PT script for post-layout setup-time STA

# Define the design and read the netlist only
set active_design <design name>

read_db -netlist_only $active_design.db
# or use the following command to read the Verilog netlist.
# read_verilog $active_design.v

current_design $active_design

set_wire_load_model <wire-load model name>
set_wire_load_mode < top | enclosed | segmented >

# Use worst-case operating conditions for setup-time analysis
set_operating_conditions <worst-case operating conditions>
PT script for post-layout hold-time STA

# Define the design and read the netlist only
set active_design <design name>

read_db -netlist_only $active_design.db
# or use the following command to read the Verilog netlist.
# read_verilog $active_design.v

current_design $active_design

set_wire_load_model <wire-load model name>
set_wire_load_mode < top | enclosed | segmented >

# Use best-case operating conditions for hold-time analysis
set_operating_conditions <best-case operating conditions>
# Assuming the 50pf load requirement for all outputs
set_load 50.0 [all_outputs]

# Back annotate the best-case (extracted) layout information.
source capacitance_best.pt    # actual parasitic capacitances
read_sdf rc_delays_best.sdf   # actual RC delays

# Assuming the clock name is CLK with a period of 30ns.
create_clock -period 30 -waveform [0 15] CLK
set_propagated_clock [get_clocks CLK]
set_clock_uncertainty 0.2 -hold [get_clocks CLK]

# The input and output delay constraint values are assumed
# to be derived from the design specifications.
set_input_delay 15.0 -clock CLK [all_inputs]
set_output_delay 10.0 -clock CLK [all_outputs]

# Assuming a Tcl variable TESTMODE has been defined.
# This variable is used to switch between the normal-mode and
# the test-mode for static timing analysis. Case analysis for
# normal-mode is enabled when TESTMODE = 1, while
# case analysis for test-mode is enabled when TESTMODE = 0.
# The bist_mode signal is used from the example illustrated in
# Figure 12-3.
set TESTMODE [getenv TESTMODE]
if {$TESTMODE == 1} {
    set_case_analysis 1 [get_port bist_mode]
} else {
    set_case_analysis 0 [get_port bist_mode]
}

# The following command determines the overall health
# of the design.
report_constraint -all_violators

# Extensive analysis is performed using the following commands.
report_timing -to [all_registers -data_pins] -delay_type min
report_timing -to [all_outputs] -delay_type min

12.7 Analyzing Reports

The following sub-sections illustrate the timing reports generated by the report_timing command, both for pre-layout and post-layout analysis. A clock period of 30ns is assumed for the clock named tck of an example tap_controller design.

12.7.1 Pre-Layout Setup-Time Analysis Report

Example 12.1 illustrates the STA report generated during the pre-layout phase.
The ideal setting for the clock is assumed, using the pre-layout clock specification commands. The following command was used to instruct PT to display a timing report for the worst path (maximum delay), starting at the input port tdi and ending at the input pin of a flip-flop.

pt_shell> report_timing -from tdi -to [all_registers -data_pins]

The default settings were used, i.e., the -delay_type option was not specified, therefore PT performs the setup-time analysis on the design by assuming the max setting for the -delay_type option. Furthermore, PT uses the default values of the -nworst and -max_paths options. This ensures that the timing report for a single worst path (minimum slack value) is generated. All other paths starting from the tdi input port and ending at other flip-flops will have a higher slack value, and thus will not be displayed.

Example 12.1

Report  : timing -path full -delay max -max_paths 1
Design  : tap_controller
Version : 1998.08-PT2
Date    : Tue Nov 17 11:16:18 1998

Startpoint: tdi (input port clocked by tck)
Endpoint: ir_block/ir_reg0 (rising edge-triggered flip-flop clocked by tck)
Path Group: tck
Path Type: max

Point                               Incr      Path
clock tck (rise edge)               0.00      0.00
clock network delay (ideal)         0.00      0.00
input external delay               15.00     15.00 r
tdi (in)                            0.00     15.00 r
pads/tdi (pads)                     0.00     15.00 r
pads/tdi_pad/Z (PAD1X)              1.32     16.32 r
pads/tdi_signal (pads)              0.00     16.32 r
ir_block/tdi (ir_block)             0.00     16.32 r
ir_block/U1/Z (AND2D4)              0.28     16.60 r
ir_block/U2/ZN (INV0D2)             0.33     16.93 f
ir_block/U1234/Z (OR2D0)            1.82     18.75 f
ir_block/U156/ZN (NOR3D2)           1.05     19.80 r
ir_block/ir_reg0/D (DFF1X)          0.00     19.80 r
data arrival time                             19.80

clock tck (rise edge)              30.00     30.00
clock network delay (ideal)         2.50     32.50
ir_block/ir_reg0/CP (DFF1X)                  32.50 r
library setup time                 -0.76     31.74
data required time                            31.74

data required time                            31.74
data arrival time                            -19.80
slack (MET)                                   11.94

It is clear from the above report that the design meets the required setup-time with a slack value of 11.94ns.
This means that there is a margin of at least 11.94ns before the setup-time of the endpoint flop is violated.

12.7.2 Pre-Layout Hold-Time Analysis Report

Example 12.2 illustrates the STA report generated during the pre-layout phase. The ideal setting for the clock is assumed, using the pre-layout clock specification commands. In order to perform hold-time STA, the following command was used to instruct PT to display a timing report for a minimum delay path, existing between two flip-flops.

pt_shell> report_timing -from [all_registers -clock_pins] \
                        -to [all_registers -data_pins] \
                        -delay_type min

In the above case, the -delay_type option was specified with the min value, thus informing PT to display the best-case timing report. The default values of all other options were maintained.

Example 12.2

Report  : timing -path full -delay min -max_paths 1
Design  : tap_controller
Version : 1998.08-PT2
Date    : Tue Nov 17 11:16:18 1998

Startpoint: state_block/st_reg9 (rising edge-triggered flip-flop clocked by tck)
Endpoint: state_block/bp_reg2 (rising edge-triggered flip-flop clocked by tck)
Path Group: tck
Path Type: min

Point                               Incr      Path
clock tck (rise edge)               0.00      0.00
clock network delay (ideal)         2.50      2.50
state_block/st_reg9/CP (DFF1X)      0.00      2.50 r
state_block/st_reg9/Q (DFF1X)       0.05      2.55 r
state_block/U15/Z (BUFF4X)          0.15      2.70 r
state_block/bp_reg2/D (DFF1X)       0.10      2.80 r
data arrival time                             2.80

clock tck (rise edge)               0.00      0.00
clock network delay (ideal)         2.50      2.50
state_block/bp_reg2/CP (DFF1X)                2.50 r
library hold time                   0.50      3.00
data required time                            3.00

data required time                            3.00
data arrival time                            -2.80
slack (VIOLATED)                             -0.20

A negative slack value in the above report implies that the hold-time of the endpoint flop is violated by 0.20ns. This is due to the data arriving too fast with respect to the clock. To fix the hold-time for the above path, the setup-time analysis should also be performed on the same path, in order to find the overall slack margin.
Doing this provides a time frame in which the data can be manipulated. For the above example, if the setup-time slack value is large (say, 10ns) then the data can be delayed by 0.20ns or more (say, 1ns), thus providing ample hold-time at the endpoint flop. However, if the setup-time slack value is small (say, 0.50ns) then a very narrow margin of 0.30ns (0.50ns minus 0.20ns) exists. Delaying the data by an exact amount of 0.20ns will produce the desired results, leaving 0.30ns as the setup-time slack. However, the minute time-window of 0.30ns makes it extremely difficult for designers to fix the timing violation: the data must be delayed just enough so that it does not violate the setup-time requirements. In this case, the logic may need to be re-synthesized and the violating path targeted for further optimization.

12.7.3 Post-Layout Setup-Time Analysis Report

The same command that is used for pre-layout setup-time STA also performs the post-layout setup-time analysis. However, the report generated is slightly different, in the sense that PT uses asterisks to denote the delays that are back annotated.

Example 12.3 illustrates the post-layout timing report generated by PT to perform the setup-time STA. The same path segment shown in Example 12.1 (the pre-layout setup-time STA) is targeted to demonstrate the differences between the pre-layout and the post-layout timing reports.
Example 12.3

Report  : timing -path full -delay max -max_paths 1
Design  : tap_controller
Version : 1998.08-PT2
Date    : Wed Nov 18 12:14:18 1998

Startpoint: tdi (input port clocked by tck)
Endpoint: ir_block/ir_reg0 (rising edge-triggered flip-flop clocked by tck)
Path Group: tck
Path Type: max

Point                               Incr      Path
clock tck (rise edge)               0.00      0.00
clock network delay (propagated)    0.00      0.00
input external delay               15.00     15.00
tdi (in)                            0.00     15.00
pads/tdi (pads)                     0.00     15.00
pads/tdi_pad/Z (PAD1X)              1.30     16.30
pads/tdi_signal (pads)              0.00     16.30
ir_block/tdi (ir_block)             0.00     16.30
ir_block/U1/Z (AND2D4)              0.22*    16.52
ir_block/U2/ZN (INV0D2)             0.24*    16.76
ir_block/U1234/Z (OR2D0)            0.56*    17.32
ir_block/U156/ZN (NOR3D2)           0.83*    18.15
ir_block/ir_reg0/D (DFF1X)          1.03*    19.18
data arrival time                             19.18

clock tck (rise edge)              30.00     30.00
clock network delay (propagated)    2.00     32.00
ir_block/ir_reg0/CP (DFF1X)                  32.00
library setup time                 -0.76     31.24
data required time                            31.24

data required time                            31.24
data arrival time                            -19.18
slack (MET)                                   12.06

By comparison, the post-layout timing results improve from a slack value of 11.94 (in Example 12.1) to 12.06. This variation is attributed to the
report_timing -from state_block/st_reg9/CP \
              -to state_block/bp_reg2/D \
              -delay_type min \
              -nets -capacitance -transition_time

In the above command, the additional options -nets, -capacitance and -transition_time are used. Although the above command uses all three options concurrently, these options may also be used independently. The timing report shown in Example 12.5 is identical to the one shown in Example 12.2, except that it uses the above command to produce a timing report that includes additional information on the fanout, load capacitance and the transition time.

Example 12.5

Report  : timing -path full -delay min -max_paths 1
Design  : tap_controller
Version : 1998.08-PT2
Date    : Tue Nov 17 11:16:18 1998

Startpoint: state_block/st_reg9 (rising edge-triggered flip-flop clocked by tck)
Endpoint: state_block/bp_reg2 (rising edge-triggered flip-flop clocked by tck)
Path Group: tck
Path Type: min

Point                           Fanout   Cap    Trans   Incr   Path
clock tck (rise edge)                           0.30    0.00   0.00
clock network delay (ideal)                             2.50   2.50
state_block/st_reg9/CP (DFF1X)                  0.30    0.00   2.50 r
state_block/st_reg9/Q (DFF1X)                   0.12    0.05   2.55 r
state_block/n1234 (net)              2   0.04
state_block/U15/Z (BUFF4X)                      0.32    0.15   2.70 r
state_block/n2345 (net)              8   2.08
state_block/bp_reg2/D (DFF1X)                   0.41    0.10   2.80 r
data arrival time                                              2.80

clock tck (rise edge)                           0.30    0.00   0.00
clock network delay (ideal)                             2.50   2.50
state_block/bp_reg2/CP (DFF1X)                                 2.50 r
library hold time                                       0.50   3.00
data required time                                             3.00

data required time                                             3.00
data arrival time                                             -2.80
slack (VIOLATED)                                              -0.20

By analyzing the timing report shown in Example 12.5, it is evident that the cell U15 (BUFF4X) has a fanout of 8, with a load capacitance of 2.08. The computed cell delay is 0.15ns. As stated before, the hold-time violation is fixed by delaying the data with respect to the clock.
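The delay-budget arithmetic behind this fix can be captured in a few lines of plain Tcl. This is a sketch, not a PT command: given the hold-time violation and the setup-time slack of the same path, it simply bounds the extra delay that may safely be inserted on the data path.

```tcl
# Any inserted delay d must satisfy:
#     hold_violation <= d <= setup_slack
# If setup_slack < hold_violation, the window is empty and delaying
# the data cannot fix the path; re-synthesis is required instead.
proc delay_window {setup_slack hold_violation} {
    if {$setup_slack < $hold_violation} {
        return {}
    }
    return [list $hold_violation $setup_slack]
}

puts [delay_window 10.0 0.20]   ;# wide window: 0.20ns up to 10.0ns
puts [delay_window 0.50 0.20]   ;# narrow window: 0.20ns up to 0.50ns
```

The second case corresponds to the narrow 0.30ns margin discussed earlier, where the fix becomes difficult to tune.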
Therefore, if the drive strength of the cell U15 is reduced from 4X to 1X, it will result in an increased delay value for the cell U15, due to the increase in transition time. This increase in delay value will contribute towards slowing the entire data path, thus removing the hold-time violation. The resulting timing report is shown in Example 12.6.

Example 12.6

Report  : timing -path full -delay min -max_paths 1
Design  : tap_controller
Version : 1998.08-PT2
Date    : Tue Nov 17 11:16:18 1998

Startpoint: state_block/st_reg9 (rising edge-triggered flip-flop clocked by tck)
Endpoint: state_block/bp_reg2 (rising edge-triggered flip-flop clocked by tck)
Path Group: tck
Path Type: min

Point                           Fanout   Cap    Trans   Incr   Path
clock tck (rise edge)                           0.30    0.00   0.00
clock network delay (ideal)                             2.50   2.50
state_block/st_reg9/CP (DFF1X)                  0.30    0.00   2.50 r
state_block/st_reg9/Q (DFF1X)                   0.12    0.05   2.55 r
state_block/n1234 (net)              2   0.04
state_block/U15/Z (BUFF1X)                      1.24    0.40   2.95 r
state_block/n2345 (net)              8   2.08
state_block/bp_reg2/D (DFF1X)                   1.25    0.10   3.05 r
data arrival time                                              3.05

clock tck (rise edge)                           0.30    0.00   0.00
clock network delay (ideal)                             2.50   2.50
state_block/bp_reg2/CP (DFF1X)                                 2.50 r
library hold time                                       0.50   3.00
data required time                                             3.00

data required time                                             3.00
data arrival time                                             -3.05
slack (MET)                                                    0.05

In the timing report shown above, by reducing the drive strength of the cell U15 from 4X to 1X, an increase in transition time, and therefore an increase in the incremental delay of the gate, is achieved. This impacts the overall data path, which results in a positive slack margin of 0.05ns, thus removing the hold-time violation for the endpoint flop.

12.8.2 Cell Swapping

PT allows the ability to swap cells in the design, as long as the pinout of the existing cell is identical to the pinout of the replacement cell. This capability allows designers to perform what-if scenarios without leaving the pt_shell session.
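Such a what-if experiment can be sketched directly in Tcl using PT's swap_cell command. The instance name, the BUFF2X/BUFF1X cell names and the stdcell_lib library below are assumptions carried over from the examples; real names come from your own netlist and technology library.

```tcl
# Try successively weaker (assumed) drive strengths for the violating
# buffer, re-checking the min-delay (hold) path after each swap.
foreach drive {BUFF2X BUFF1X} {
    swap_cell {state_block/U15} [get_lib_cell stdcell_lib/$drive]
    report_timing -from state_block/st_reg9/CP \
                  -to state_block/bp_reg2/D \
                  -delay_type min
}
# Note: swap_cell changes only PT's in-memory view of the netlist.
# Once a drive strength with positive slack is found, the change must
# still be edited into the physical netlist by hand.
```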
In Example 12.6, the cell BUFF1X (having a lower drive strength and a pinout identical to BUFF4X) replaced the cell BUFF4X. However, the process of replacement was not discussed. There are two methods to achieve this. The netlist could be modified manually before performing STA; or designers may use the cell swapping capability of PT to perform the what-if STA scenarios on the violating path segments, before manually modifying the netlist.

Manually modifying the netlist before performing the what-if scenarios is certainly a viable approach. However, it is laborious. For the case shown in Example 12.6, first the pt_shell session is terminated, then the netlist is modified manually (BUFF4X replaced with BUFF1X), and finally the pt_shell session is invoked again to re-analyze the previously violating path segment (from st_reg9 to bp_reg2). If the modifications to the netlist do not produce the desired results (the path segment is still violating timing) then the whole process needs to be repeated. This approach is certainly tedious and wasteful.

A preferred alternative is to use the following command to replace the existing cell with another:

pt_shell> swap_cell {U15} [get_lib_cell stdcell_lib/BUFF1X]

The above command can be used within the pt_shell session to replace the existing cell BUFF4X (instanced as U15) with BUFF1X (from the "stdcell_lib" technology library), and the path segment re-analyzed to view the effect of the swapping. This provides a faster approach to debugging the design and visualizing the effect of cell swapping, without terminating the pt_shell session.

Note that the cell swapping only occurs inside the PT memory. The physical netlist remains unmodified. If the path segment and the rest of the design pass STA, this modification should be incorporated by modifying the netlist manually.

12.8.3 Bottleneck Analysis

Sometimes a design may contain multiple path segments that share a common leaf cell.
If these path segments are failing timing, then changing the drive strength (sizing it up or down) of the common leaf cell may remove the timing violation for all the path segments. PT provides the capability of identifying a common leaf cell that is shared by multiple violating path segments in a design. This is termed bottleneck analysis, and is performed by using the report_bottleneck command.

In Example 12.2, a hold-time violation exists for the path segment starting from state_block/st_reg9 and ending at state_block/bp_reg2. However, a hold-time violation also exists (shown in Example 12.7) for the path segment starting from the same startpoint (state_block/st_reg9) but ending at a different endpoint, state_block/enc_reg0.

Example 12.7

Report  : timing -path full -delay min -max_paths 1
Design  : tap_controller
Version : 1998.08-PT2
Date    : Tue Nov 17 11:24:10 1998

Startpoint: state_block/st_reg9 (rising edge-triggered flip-flop clocked by tck)
Endpoint: state_block/enc_reg0 (rising edge-triggered flip-flop clocked by tck)
Path Group: tck
Path Type: min

Point                               Incr      Path
clock tck (rise edge)               0.00      0.00
clock network delay (ideal)         2.50      2.50
state_block/st_reg9/CP (DFF1X)      0.00      2.50 r
state_block/st_reg9/Q (DFF1X)       0.05      2.55 r
state_block/U15/Z (BUFF4X)          0.15      2.70 r
state_block/enc_reg0/D (DFF1X)      0.07      2.77 r
data arrival time                             2.77

clock tck (rise edge)               0.00      0.00
clock network delay (ideal)         2.50      2.50
state_block/enc_reg0/CP (DFF1X)               2.50 r
library hold time                   0.50      3.00
data required time                            3.00

data required time                            3.00
data arrival time                            -2.77
slack (VIOLATED)                             -0.23

In the timing report shown above, the hold-time violation is 0.23ns. Visual inspection of the two timing reports (Examples 12.2 and 12.7) reveals that a single cell, BUFF4X (instanced as U15), is common to both path segments (st_reg9 -> bp_reg2, and st_reg9 -> enc_reg0). Thus, reducing the drive strength of this cell may eliminate the hold-time violation for both path segments.
However, this process involves careful visual inspection of all the path segments in the design, in an effort to identify the common leaf cell between the startpoint and the endpoint of all the violating path segments. This method can be extremely tedious for a large number of path segments.

The recommended method of identifying a common leaf cell between the startpoint and the endpoint of all the violating path segments is to perform the bottleneck analysis. For the above case (in Example 12.7), the following command was used to identify the common leaf cell shared by the violating path segments.

pt_shell> report_bottleneck

Example 12.8 illustrates the report that was generated by PT, identifying the cell U15 (BUFF4X) as the common leaf cell shared by the two path segments mentioned above.

Example 12.8

Report  : bottleneck -cost_type path_count -max_cells 20 -nworst_paths 100
Design  : tap_controller
Version : 1998.08-PT2
Date    : Tue Nov 17 12:09: 1998

Bottleneck Cost = Number of violating paths through cell

Bottleneck Cell     Reference     Cost
U15                 BUFF4X        2.00

Once the cell has been identified, it can be swapped with another in order to fix the timing violation of multiple path segments. Once again, a complete STA should be performed on the entire design. Any required changes (due to cell swapping etc.) should be manually incorporated in the final netlist.

12.8.4 Clock Gating Checks

Usually, low power designs contain clocks that are enabled by gating logic, only when needed. For such designs, the cell used for gating the clock should be analyzed for setup and hold-time violations, in order to avoid clipping of the clock. The setup and hold-time requirements may be specified through the set_clock_gating_check command, explained in Chapter 11.
For example:

pt_shell> set_clock_gating_check -setup 0.5 -hold 0.02 tck

Example 12.9 illustrates the clock gating report that utilized the setup and hold-time requirements specified above for the gated clock, "tck". The following command was used to generate the report:

pt_shell> report_constraint -clock_gating_setup \
                            -clock_gating_hold \
                            -all_violators

Example 12.9

Report  : constraint -all_violators -path slack_only -clock_gating_setup -clock_gating_hold
Design  : tap_controller
Version : 1998.08-PT2
Date    : Tue Nov 17 12:30:07 1998

clock_gating_setup

Endpoint                    Slack
state_block/U1789/A1        -1.02 (VIOLATED)
state_block/U1346/A1        -0.98 (VIOLATED)

clock_gating_hold

Endpoint                    Slack
state_block/U1789/A1        -0.10 (VIOLATED)
state_block/U1450/A1        -0.02 (VIOLATED)

It is important to note that the -all_violators option should be used in addition to the -clock_gating_setup and the -clock_gating_hold options. Failure to include the -all_violators option will result in a report displaying only the cost function of the failures, instead of identifying the failed gates.

The -verbose option may also be included to display a full path report, for the purpose of debugging the cause of the violation and how it may be corrected.
Example 12.10 illustrates one such report that was generated by using the following command:

pt_shell> report_constraint -clock_gating_hold \
                            -all_violators \
                            -verbose

Example 12.10

Report  : constraint -all_violators -path slack_only -clock_gating_hold
Design  : tap_controller
Version : 1998.08-PT2
Date    : Tue Nov 17 12:32:10 1998

Startpoint: state_block/tst_reg1 (rising edge-triggered flip-flop clocked by tck)
Endpoint: state_block/U1789 (rising clock gating-check end-point clocked by tck)
Path Group: **clock_gating_default**
Path Type: min

Point                               Incr      Path
clock tck (rise edge)               0.00      0.00
clock network delay (propagated)    2.25      2.25
state_block/tst_reg1/CP (DFF1X)     0.00      2.25 r
state_block/tst_reg1/Q (DFF1X)      0.05*     2.30 r
state_block/U1789/A2 (AND4X)        0.12*     2.42 r
data arrival time                             2.42

clock tck (rise edge)               0.00      0.00
clock network delay (propagated)    2.50      2.50
state_block/U1789/A1 (AND4X)                  2.50 r
clock gating hold time              0.02      2.52
data required time                            2.52

data required time                            2.52
data arrival time                            -2.42
slack (VIOLATED)                             -0.10

In the above example, the AND4X gate is used to gate the clock, "tck". Pin A2 of this cell is connected to the enabling signal, whereas the clock drives pin A1 of this cell. As can be seen from the report, the hold-time is being violated by the gating logic. In order to fix the hold-time violation, the cell AND4X may be sized down to slow the data path.

12.9 Chapter Summary

Static timing analysis is key to success, with working silicon as the final product. Static timing analysis not only verifies the design for timing, but also checks all the path segments in the design. This chapter covered all the steps necessary to analyze a design comprehensively through static timing analysis.

The chapter started by comparing static timing analysis to the dynamic simulation method as the tool for timing verification. It was recommended that the former method be used as an alternative to the dynamic simulation approach.
This was followed by a detailed discussion on timing exceptions, which included multicycle and false paths. Helpful hints were provided to guide the user in choosing the best approach.

A separate section was devoted to disabling the timing arcs of cells and to performing case analysis. Case analysis was recommended over individual disabling of timing arcs, for designs containing many cells with timing arcs that are related to a common signal. An example case of DFT logic was provided as an application of case analysis.

In addition, the process of analyzing designs both for pre-layout and post-layout was covered in detail, which included clock specification and timing analysis. Finally, a comprehensive section was devoted to timing reports, followed by advanced analysis of the timing reports. At each step, example reports were provided and explained in detail.

Appendix

RM = \rm
MV = \mv
DC_SHELL = dc_shell
CLEANUP = $(RM) command.log pt_shell_command.log

#
# Modules for Synthesis
#
top:
	@make $(SYNDB)/top.db

$(SYNDB)/top.db: $(SCRIPT)/top.scr \
		$(SRC)/top.v \
		$(SYNDB)/A.db \
		$(SYNDB)/B.db
	$(DC_SHELL) -f $(SCRIPT)/top.scr | tee top.log
	$(MV) top.log $(LOG)
	$(MV) top.v $(NETLIST)
	$(MV) top.db $(SYNDB)
	$(CLEANUP)

A:
	@make $(SYNDB)/A.db

$(SYNDB)/A.db: $(SCRIPT)/A.scr \
		$(SRC)/A.v
	$(DC_SHELL) -f $(SCRIPT)/A.scr | tee A.log
	$(MV) A.log $(LOG)
	$(MV) A.v $(NETLIST)
	$(MV) A.db $(SYNDB)
	$(CLEANUP)

B:
	@make $(SYNDB)/B.db

$(SYNDB)/B.db: $(SCRIPT)/B.scr \
		$(SRC)/B.v
	$(DC_SHELL) -f $(SCRIPT)/B.scr | tee B.log
	$(MV) B.log $(LOG)
	$(MV) B.v $(NETLIST)
	$(MV) B.db $(SYNDB)
	$(CLEANUP)

Index

.synopsys_dc.setup, 45, 107
.synopsys_dc.setup file, 17
.synopsys_pt.setup, 45, 210
.synopsys_pt.setup file, 18

A
all_inputs, 232
all_outputs, 233
all_registers, 232
all_violators, 225, 271
analyze command, 55
area constraints, 126
area optimization, 145
ASIC design flow, 2
assign statements, 165
asterisks, 258, 260
ATPG, 151
Attributes, 51
Awk, 192

B
back annotated delays, 258, 260
back annotation to
PT, 248 balance_buffer, 144 balanced clock tree, 173 balanced_troe, 66 Behavior Compiler, 5 Behavioral level, best case tee, 67 BIST, 148 blocking assignments, 96 boolean, 138 bottleneck analysis, 226, 267 bottom-up compile, 16 boundary sean, 148, braces, 215 brute force method, 177 buffer_constant, 166 bus naming style 18, 163 bypassing RAMS, 156 c ‘capacitance, 224 cease analysis, 222, 240, 241 cell swapping, 265, cell_footprint, 186 cchange_names, 163 ‘characterize, 133 characterize-compile, 130 charaterize-write-script, 85 check_design, 164 check_test, 150 clock gating, 142,220, 269 clock gating logic, 83 clock latency, 25, 174, 201, 217,249