IC Validator User Guide
Contents
New in This Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
Related Products, Publications, and Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Customer Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Statement on Inclusivity and Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1. IC Validator Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Using the IC Validator Tool in Your Design Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Running the IC Validator Tool Within the IC Compiler Tool . . . . . . . . . . . . . . . . 19
Running the IC Validator Tool Within the IC Compiler II Tool . . . . . . . . . . . . . . . 20
Running the IC Validator Tool With StarRC Parasitic Extraction . . . . . . . . . . . . 20
Running the IC Validator Tool From the Custom Compiler Platform . . . . . . . . . 21
Setting Up and Running DRC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Setting Up and Running LVS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Command-Line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
General Command-Line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Compiler Directives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Support for 64-Bit Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43
Run-Only Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Environment Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Providing Additional Information to the Compiler . . . . . . . . . . . . . . . . . . . . . . . . 47
Creating Preprocessor Definitions From Environment Variables . . . . . . . . . . . . 48
Pruning Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Prioritizing the Scheduling of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Runset Insertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Exit Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Inlined Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Distributed Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54
Specifying Host Names and CPU Counts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Modifying Resources for a Running Job . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Running in Platform LSF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Running in Univa Grid Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
License Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Runset Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Cache Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Configuring Runset Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
Matching a Runset to a Cache File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Using IC Validator Command-Line Options With Runset Caching . . . . . . . . . . . . . . 63
Compatible Command-Line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63
Command-Line Options That Affect Runset Caching . . . . . . . . . . . . . . . . . . . . .63
Incompatible Command-Line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Layer Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .64
Runset Layer Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Data Layer Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Using PXL Runset Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68
3. Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Description of Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
General DRC and LVS Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Results File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Error File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Summary File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Rules File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Technology File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Tree Structure Output File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Virtual Cell File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Pair Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Set Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Explode Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Distributed Log File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91
Distributed Statistics File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Distributed Work Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Graphical Error Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
LVS-Specific Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
LVS Debugging File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Extracted Layout Netlist File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94
Extracted Layout SPICE Netlist File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94
Account File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Bounding Box File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Equivalence Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Device Leveling Summary File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Compare Log File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Compare Tree Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Compare Directory Output Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Individual Equivalence Summary File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96
Partitioned Schematic Netlist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Partitioned Layout Netlist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Editing Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
7. PXL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
PXL Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Conventions
The following conventions are used in Synopsys documentation.
Convention Description
Courier bold Indicates user input—text you type verbatim—in examples, such as:
prompt> write_file top
Edit > Copy Indicates a path to a menu command, such as opening the Edit
menu and choosing Copy.
Customer Support
Customer support is available through SolvNetPlus.
Accessing SolvNetPlus
The SolvNetPlus site includes a knowledge base of technical articles and answers to
frequently asked questions about Synopsys tools. The SolvNetPlus site also gives you
access to a wide range of Synopsys online services including software downloads,
documentation, and technical support.
To access the SolvNetPlus site, go to the following address:
https://solvnetplus.synopsys.com
If prompted, enter your user name and password. If you do not have a Synopsys user
name and password, follow the instructions to sign up for an account.
If you need help using the SolvNetPlus site, click REGISTRATION HELP in the top-right
menu bar.
1. IC Validator Basics
This chapter gives a brief overview of the IC Validator physical verification tool. It also
describes the IC Validator command-line options and distributed processing.
The basics of the IC Validator tool are described in the following sections:
• Using the IC Validator Tool in Your Design Flow
• Command-Line Options
• Compiler Directives
• Exit Codes
• Inlined Functions
• Distributed Processing
• Runset Caching
• Using IC Validator Command-Line Options With Runset Caching
• Layer Mapping
• Using PXL Runset Encryption
◦ Layer assignments
◦ Database checks, such as snap and grid checks
◦ Design rule checking (DRC), electrical rule checking (ERC), fill patterns, or layout-
versus-schematic (LVS) checking
• Run IC Validator DRC and ERC on your layout database, verifying your design against
the design rule document.
• Review and debug errors generated in the IC Validator DRC and ERC run using
IC Validator error files or VUE.
• Run the IC Validator tool to perform fill-pattern generation, including basic DRC checks
on the layers that are filled.
• Run IC Validator LVS checking.
• Review and debug errors generated in the IC Validator LVS run using IC Validator error
files or VUE.
See Appendix A, IC Validator Architecture.
• signoff_autofix_drc. Performs automatic DRC repair. In this flow, the IC Validator tool:
◦ Performs signoff DRC checking.
◦ Generates route guidance that assists the router in performing targeted repairs.
◦ Validates the repair done by the router.
• signoff_metal_fill. Inserts metal and via fill.
• report_metal_density. Calculates the metal density and density gradient values and
annotates the values on the design.
Note:
You must make sure that the IC Validator tool can parse and run the runset
before using it within the IC Compiler tool.
See the IC Compiler Implementation User Guide, which is available on SolvNetPlus, or the
IC Compiler man pages for detailed command syntax and usage.
• signoff_fix_drc. Performs automatic DRC repair.
Note:
You must make sure that the IC Validator tool can parse and run the runset
before using it within the IC Compiler II tool.
See the IC Compiler II Implementation User Guide, which is available on SolvNetPlus, or
the IC Compiler II man pages for detailed command syntax and usage.
◦ pex_via_layer_map()
◦ pex_marker_layer_map()
◦ pex_remove_layer_map()
◦ pex_viewonly_layer_map()
◦ pex_ignore_cap_layer_map()
Table 1 shows the LVS settings that are not supported for use in the IC Validator–StarRC
flow. These settings should be set to false.
Table 1 LVS Settings Not Supported for IC Validator–StarRC Flow
See the IC Validator Reference Manual for detailed syntax and usage of the parasitic
extraction functions.
Command-Line Options
You run the IC Validator tool with the icv command, followed by command-line options and the runset file name, on the command line.
If an error occurs, an exit code is reported in the cell.err file.
IC Validator User Guide 23
S-2021.06-SP2
Option Definition
-64 Runs the IC Validator tool with 64-bit coordinates support. See
Support for 64-Bit Coordinates for more information.
This option is available with Elite licensing. See Which IC
Validator Licenses Do I Need? for information about IC
Validator licensing.
-bct option Blocks forward data flow analysis through the outputs of specified commands. The selections for option are:
• extent. The extent functions whose output is blocked are
cell_extent(), cell_extent_layer(), layer_extent(),
and chip_extent().
• connect. The connect functions whose output is blocked
are connect(), incremental_connect(), stamp(), and
stamp_edge().
• both. The output of both extent and connect functions is
blocked.
• none. Forward data flow analysis is not blocked. This value is
the default.
The -bct command-line option must be used with the -il or
-iln command-line options. You can use all three options
together.
The selection of the specified commands, however, is not
blocked. For example, if there is a data flow path from an
incremental assign layer to a check that requires output from a
connect() function, the connect() functions are not blocked
even if the -il layer_name -bct connect command-line
options are used.
-C Runs only a netlist comparison between the existing schematic
netlist and a previously generated layout netlist, using the
information specified in the runset. DRC and extraction are not
done.
This command-line option is intended for rerunning compare
when the design data has not changed but some compare
settings are changed.
Do not place compare-related functions, such as extract, filter, and merge functions, in a conditional statement whose execution depends on a condition that is derived from layers.
In this situation, the data dependency triggers the restart of data
creation, and the IC Validator tool reruns almost everything in
the runset.
Note:
For more information on using the -C command-line option
with the layer_empty() function, see Forcing Runtime
Conditional Results.
-c cell Overrides the cell argument in the library() function.
-cache-only Generates a cache file in the local cache directory for a runset without running it through the engine.
Option Definition
-clf file Allows you to add command-line options to your IC Validator
run using a text file. It performs a left-to-right replacement with
the options from the specified file. This command-line option
takes only one file, but you can use multiple -clf options on the
command line.
The contents of the text file are not processed by the UNIX shell; for example, wildcards are not expanded and the line continuation character (\) is not interpreted.
For example,
% icv -clf normal.clf -il "one"
-clf select_violation_rules.clf drc22nm.rs
The IC Validator tool expands text in the file that begins with a $ sign as an environment variable. For example, the MY_INCLUDE environment variable set on the command line as
% setenv MY_INCLUDE /u/path
could be used in the text file to set the path for runset include
files:
-I $MY_INCLUDE/include
-cpp outputfile Reads the input runset, writes a preprocessed version of the
runset to the output file, and then immediately terminates. All
preprocessor directives and macros are fully processed in the
output runset, and all included user (non-IC Validator) files are
expanded. All comments are removed from the output file and
encrypted sections are preserved.
Option Definition
-create_lvs_short_output [integer|ALL] Creates VUE output and generates LVS Short Finder output. You can specify to report only a set number of shorts or all shorts. The default is 200.
A cellname.vue file is created. The output is in the
./run_details/short_out_db directory.
When there is a set limit, the shorts are reported in the following
order, with the lowest level within the hierarchy reported first:
1. Shorts between power and ground
2. Shorts between power and power, or between ground and
ground
3. Shorts between power and signal, or between ground and
signal
4. Shorts between signal and signal
This output can also be enabled by using the
create_lvs_short_output argument of the error_options()
function. However, you can only change the number of shorts
using the command-line option.
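When a limit is set, the four categories above act as a sort key. The following Python sketch illustrates only that ordering; the function name and the net-kind strings are assumptions for this example, not IC Validator internals.

```python
# Illustrative sketch of the reporting order used when a short limit is
# set. The rank numbers mirror the four categories above; the net-kind
# strings and function name are assumptions, not the tool's own code.
POWER, GROUND, SIGNAL = "power", "ground", "signal"

def short_rank(net_a, net_b):
    kinds = {net_a, net_b}
    if kinds == {POWER, GROUND}:
        return 1  # shorts between power and ground
    if kinds in ({POWER}, {GROUND}):
        return 2  # power-to-power or ground-to-ground
    if SIGNAL in kinds and kinds & {POWER, GROUND}:
        return 3  # power-to-signal or ground-to-signal
    return 4      # signal-to-signal
```

Sorting a list of shorts by (short_rank(a, b), hierarchy_level) would reproduce the documented order, with the lowest hierarchy level first within each category.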
-D name[=value] Defines symbolic names with an optional value. The default value is 1.
Note:
When using the -D command-line option to pass string values (icv -D FOO="BAR"), you must ensure that the quotation marks are not removed by the shell (icv -D FOO='"BAR"'), or put -D FOO="BAR" into a -clf file.
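The quoting behavior can be previewed without running icv: Python's shlex module follows the same POSIX tokenization rules the shell uses, so it shows which characters survive quote removal. The icv command strings below are illustrative only.

```python
import shlex

# The shell strips one level of quotation marks, so FOO="BAR" reaches
# the program as FOO=BAR.
print(shlex.split('icv -D FOO="BAR"'))      # ['icv', '-D', 'FOO=BAR']
# Wrapping the value in single quotes protects the inner double quotes.
print(shlex.split('icv -D FOO=\'"BAR"\''))  # ['icv', '-D', 'FOO="BAR"']
```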
-debug [-debug_source file] Runs the IC Validator PXL Debugger. See IC Validator PXL
Debugger.
-dvp LVS | ALL Extracts all properties (ALL, the default) or only the properties used by LVS compare. Use this command-line option to override init_device_matrix(device_properties = ) in the runset.
-e file Adds the equiv_options() and lvs_black_box_options()
function calls in the specified file into the runset. The file can
contain one or more calls to these functions.
The only IC Validator functions that can be in the file are the
equiv_options() and lvs_black_box_options() functions,
but other valid PXL commands can be used. The following
example shows the use of a foreach loop:
equiv_cells : list of string = {"cellA", "cellB",
"cellC", "cellD", "cellE", "cellF"};
foreach( cell in equiv_cells ){
equiv_options({{cell, cell}});
}
-ece Enables the reporting of additional error codes to the shell. See
Exit Codes.
See the documentation for the shell you are using regarding
how to capture the error code status.
-email dashboard Sends an email to the user using ICV Dashboard every time the
percent complete line is updated in the icv.log file.
-ep prefix Overrides the prefix for all error cell names specified in the write
functions.
-ex Runs device extraction only. The compare() function is not
invoked, and as a result, all functions that use the compare()
function output are not run.
-exit Terminates a running IC Validator process and provides partial
results.
The -exit command-line option:
• Allows IC Validator to clean up before exiting. This cleanup includes removing:
- Any file on a local disk or network disk
- Any file in /dev/shm
• Allows IC Validator to generate partial results. The
LAYOUT.ERRORS and RESULTS files contain a banner
indicating that IC Validator exited early.
-explorer_lvs Runs the Explorer mode for LVS. See IC Validator Explorer
LVS for more information.
-explorer_lvs_cells [AUTO | ALL] Specifies which cells to check in Explorer LVS. See IC Validator
Explorer LVS for more information.
-explorer_lvs_preserve_cells [preserve_cell_list] Specifies the cells that are to be preserved from being deleted or black-boxed in Explorer LVS so that those cells can be included in Explorer LVS. See IC Validator Explorer LVS for more information.
-explorer_script path_to_file Specifies the file used to add resources after tier 0 and tier
1 are completed. See IC Validator Explorer DRC for more
information.
-f format Overrides the library format (GDSII, Milkyway, NDM, OASIS, or OpenAccess).
-filecache -host_disk path Uses memory and the local disk to store the files. By default,
the IC Validator tool uses available memory to minimize NFS
usage.
Note:
If the local disk is full, the IC Validator tool uses NFS.
-g directory Overrides the group path name specified in the group_path
argument of the run_options() function. The IC Validator
tool does not allow you to set this value to the current working
directory (.).
This command-line option takes only one directory. If you use
multiple -g options, the IC Validator tool uses the directory from
the last one.
Note:
The directory provided to this command-line option must
be visible to all hosts. If you provide a local cache directory,
see the -host_disk command-line option.
-hm_cell cell_name Enables the generation of violation heat maps for cells that match the given cell name patterns, in addition to the top cell. If any specified cell is exploded due to performance optimization, the heat map is not generated for that cell.
See the heat_map_cells argument of the error_options()
function for more information.
Note:
The -hm command-line option must also be provided for any
heat maps to be generated.
-host_init hostname | number_of_cpus | hostname:number_of_cpus | LSF | SGE | NB Starts an IC Validator job on a single host or multiple hosts. See Specifying Host Names and CPU Counts for more information.
Examples:
• -host_init hostname. Runs on the given host and uses all of the CPUs.
• -host_init number_of_cpus. Runs on the local host and uses the specified number of CPUs.
• -host_init hostname:number_of_cpus. Runs on the given host and uses the specified number of CPUs.
• -host_init LSF. Automatically determines the host and number of CPUs from the Platform LSF environment.
• -host_init SGE. Automatically determines the host and number of CPUs from the Univa Grid Engine environment.
• -host_init NB. Automatically determines the host and number of CPUs from the NetBatch environment.
Note:
When using the hostname and number_of_cpus syntax, the
argument to the -host_init command-line option can be
specified multiple times, for example,
-host_init hosta:8 hostb:8
-host_add hostname | number_of_cpus | hostname:number_of_cpus | LSF | SGE | NB Adds more hosts (including remote hosts) to an IC Validator job that is already running. The usage is the same as for -host_init. See Specifying Host Names and CPU Counts for more information.
-host_remove list_of_hosts Removes the list of hosts from the IC Validator job.
All IC Validator processes and commands linked to the running
job are shut down for the listed hosts (if this host was acquired
through a host reservation system, the host can be released).
-host_elastic LSF | SGE | user_command Automatically adds or removes hosts (including remote hosts) to or from an IC Validator job that is already running. This option enables the functionality and specifies the method for acquiring new hosts. This option is specified when a job is originally started or applied to a running job. See Specifying Host Names and CPU Counts for more information.
Examples:
• -host_elastic LSF. Uses Platform LSF to add hosts.
• -host_elastic SGE. Uses Univa Grid Engine to add hosts.
• -host_elastic user_command. Runs the specified
command to add hosts. The command can include options
and must include an icv -host_add invocation.
-host_disk path Specifies a local group directory for all hosts specified with -host_init or -host_add.
Examples:
• icv -host_init hostA:32
-host_disk /SCRATCH/user/group_1
• icv -host_add hostB:32
-host_disk /SCRATCH/user/group_1
Note:
If the local disk space becomes 100 percent full while
the tool is running, the tool no longer uses the local
directory path, but instead uses only the network path of the
run_details directory.
-host_memory Memory_Value M|G|T Attempts to maintain the user-specified target memory for
the hosts used in the IC Validator job by rescheduling and
threading. This switch cannot be used to reduce the memory of
a single command.
Examples:
-host_memory 1000M
-host_memory 100G
-host_memory 10T
-i name Overrides the library names specified in the library_name and
merge_libraries arguments of the library() function. When
you use the -i command-line option, the library_name and
merge_libraries arguments are completely redefined; none of
the original values remain.
You can specify multiple -i command-line options. The
library_name is replaced by the first -i command-line
option. The merged libraries are created from subsequent -i
values, with the first merged library created from the second
-i command-line option. The format for the merge libraries is
set to the same format as for the main library. There is no layer
mapping.
-I path Specifies a directory to search for runset include files. This command-line option takes only one path, but you can use
multiple -I options on a command line.
When you specify multiple paths, order matters. For example,
you have the file aaa.rh in both my_path1 and my_path2.
• When you specify -I my_path1 -I my_path2, the aaa.rh
file from my_path1 is used.
• When you specify -I my_path2 -I my_path1, the aaa.rh
file from my_path2 is used.
Include directories are searched after the current working
directory is searched.
If an include directory is not in the install directory, then for the
-I command-line option, you must specify the absolute path of
the directory.
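The search order can be sketched as follows. This is an illustration of the documented order (current working directory first, then each -I directory left to right), not the IC Validator implementation; resolve_include is a hypothetical helper name.

```python
# Sketch of the include-file search order described above: the current
# working directory is searched first, then each -I directory from left
# to right. Illustrative only; not the IC Validator implementation.
import os

def resolve_include(name, include_dirs, cwd="."):
    for directory in [cwd] + list(include_dirs):
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None  # not found in cwd or any -I directory
```

With aaa.rh present in both my_path1 and my_path2, resolve_include("aaa.rh", ["my_path1", "my_path2"]) returns the my_path1 copy, matching the ordering rule above.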
-icvread Enables the creation of an ICVread matrix. See the
init_icvread_matrix() function description. If you use the
icvread() function in your runset, you do not need to use this
command-line option.
-il "layer ... layer" Defines the incremental layers to be used in an incremental-by-layer run, by layer name.
• Put the set of layer names within quotation marks.
• Specify multiple names with spaces as separators.
For example,
-il "POLY"
-il "POLY DIFF METAL"
Note:
Layer names specified in the name argument of any assign
function are used for matching incremental assign layer
names.
When the name argument of an assign function is not
used, the layer name for the assign is from the left side
assignment, for example, layer_name = assign(...).
If more than one layer name is specified, incremental
processing occurs if one or more layer names are found in the
ASSIGN section. Layer names that are not found are ignored.
If none of the specified layers are found in the ASSIGN section,
the IC Validator tool exits with an error message.
The violations that are based, directly or indirectly, on the
incremental layers are executed in the IC Validator run. The
other violations are not executed in the IC Validator run.
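The incremental-layer selection rule can be sketched as follows. This is illustrative only; select_incremental_layers is a hypothetical name, not an IC Validator function.

```python
# Sketch of the incremental-by-layer selection rule described above:
# layer names not found in the ASSIGN section are ignored, and if none
# of the requested names are found, the run exits with an error.
# Illustrative only; not the IC Validator implementation.
def select_incremental_layers(requested, assign_layers):
    found = [name for name in requested if name in assign_layers]
    if not found:
        raise SystemExit("no incremental layers found in ASSIGN section")
    return found
```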
-keys key(s) Defines the keys that control environment variable access.
-layer_debugger Enables the storing of layer debugger output without the use
of the VUE. For compatibility with VUE, this command-line
option enables the creation of VUE output (as if by the -vue
command-line option). Also, to match the VUE layer debugger
behavior, runset selection is performed based on the union
of commands selected by each of the following categories:
violation comment (-svc and -uvc), violation name (-svn and
-uvn), function name (-sfn and -ufn), and run-only selection.
-lf layermappingfile Specifies the layer mapping file. This file maps the runset layer numbers to the corresponding layer numbers in the library. See Layer Mapping.
-lic_elite Forces use of the Elite licensing scheme. See Elite Licensing.
-lic_base Forces use of the Base licensing scheme. See Base Licensing.
-ln filename Overrides all of the file names specified in the layout_file
argument of the read_layout_netlist() function.
You must use the -ln command-line option in conjunction with
the -lnf command-line option.
-lnf SPICE | VERILOG | ICV Overrides all the formats specified in the layout_file
argument of the read_layout_netlist() function.
-milkyway_version Prints the Milkyway schema versions supported by the current
IC Validator version.
-ml filename Puts into the runset a call to the
format_merge_library_options() function that is
defined in filename. The format can be milkyway, ndm,
or openaccess. The syntax for this file is exactly that of
the format_merge_library_options() function. Any
format_merge_library_options() function in the runset is
overwritten.
-ndg Enables use of a streamlined processing flow, which bypasses
some stages of runset compilation, generation of dependencies
between functions, and runset optimizations that require
dependency analysis. Without dependency generation, the
IC Validator tool cannot run in distributed processing mode, and
all functions in the runset are executed sequentially on a single
host.
Use the -ndg option with small cells, where the incremental cost
of running all functions is significantly lower than the time and
memory needed to fully optimize a complex runset.
When using the -ndg option, any user function that invokes
an inline runset function must be tagged with the inline
qualifier. In general, IC Validator OPTIONS and ASSIGN
section functions fall into this category. See Inlined Functions
for a complete list of the runset functions that are tagged with
the inline qualifier. See the inline qualifier description in
Qualifiers.
When using the -ndg option, you cannot use the dual-hierarchy
extraction flow.
-ndm_version Prints the NDM schema versions supported by the current
IC Validator version.
-ned Overrides the report_error_details argument specified in the runset.
-nho Turns off hierarchy optimization. See the hierarchy_options() function.
-norscache Disables runset caching. See Runset Caching on page 59 for more information.
-nro Disables runset optimizations.
-nvn Enables the IC Validator tool to run a netlist-versus-netlist
(NVN) flow without a runset. When this option is used, any
runset on the IC Validator command line is not read.
-nvn_netlist_mode Specifies the netlist-versus-netlist (NVN) flow mode as
layout-versus-layout (LVL) or schematic-versus-schematic
(SVS). When set to LVL, the valid pin and device names used
in both input netlists are always referenced to layout names in
the runset. When set to SVS, the names are always schematic
names. When this command-line option is not specified, the pin
and device names are schematic and layout, respectively.
-nwl wire_log_file Provides a wireLog file that guides the IC Validator tool in
resolving shorts. NetTran outputs a wireLog file from the
schematic, and the engine reads this file to resolve text shorts;
this option overrides the wireLog file generated automatically by
NetTran during the IC Validator run.
You can generate a wireLog file from your schematic netlist.
See "Creating a WireLog File Using NetTran" in the
IC Validator LVS User Guide for more information about
manually generating a wireLog file by running NetTran with its
command-line option.
The wireLog file allows the IC Validator tool to use the same net
as NetTran when there is a short. The IC Validator tool assigns
the newNetName to the net.
Example of wireLog file syntax:
# format: cellName origNetName newNetName
add4 VDD2 VDD
cs_add VDD5 VDD
cs_add n36 VDD
nor2b VDD6 VDD5
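As a usage sketch, a wireLog file such as the one above could be supplied on the command line like this (all file names are hypothetical):

```
icv -s top.sp -sf SPICE -nwl my_wirelog.txt runset.rs
```

Because -nwl overrides the automatically generated wireLog file, the supplied file determines which net name survives each short.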
-oa_cell_map cellmap Overrides the cell_mapping_file argument specified in the
openaccess_options() function.
-oa_color_mapping option Specifies how to use the locked and unlocked anchor bit for the
OpenAccess color layer mapping. The selections for option are
• ignore. Uses the color mappings for which the locked or
unlocked keyword is not specified, and ignores the color
mappings for which locked or unlocked is specified.
• strict. Uses the color mappings for which the locked
or unlocked keyword is specified, and ignores the color
mappings for which locked or unlocked is not specified.
• full. Sets all shapes that are read with color to locked. The
mapping for this setting is the same as the mapping for the
compatibility setting.
• compatibility. Follows the industry standard. Ignores color
mappings for which the colorAnnotate keyword is located
after the color specification. When no color mapping rule
matches a color shape, the mapping is used without color.
-oa_layer_map layer_mapping_file Overrides the layer_mapping_file argument specified in the openaccess_options() function.
-oa_lib_defs filename Overrides the OpenAccess library_definition_file
argument setting specified in the library() function.
-oa_object_map filename Overrides the object_mapping_file argument specified in the
openaccess_options() function.
-oa_version Prints the current OpenAccess dynamic library version.
-oa_view view_name Overrides the view argument specified in the
openaccess_options() function.
-p path Overrides the library path specified in the library() function.
-pcr outputfile Reads the runset and writes compiler-checked PXL output
after function inlining and loop unrolling. The output can be
pruned by using violation selections, run-only execution, or the
incremental-by-layer -il and -iln command-line options. See
Run-Only Execution for more information.
-progress_script Runs the specified script every time the percent-complete line
is updated in the icv.log file.
-prune_assign Reads only the required runset layers from the
input design and creates the required runset layers.
When this command-line option is not used, the IC Validator
tool reads all the runset layers from the input design. Whether
or not you use the -prune_assign command-line option,
unnecessary runset layers are not reported in the summary file.
See Pruning Layers for information about the prune_assign
compiler directive.
-rd directory Specifies the run_details directory.
-ro Enables run-only processing. See Run-Only Execution.
-rofl file:line Enables run-only command selection based on the file and line
number.
-runset_config config_file Specifies the runset configuration, which can be used to change
options and compare functions.
The configuration file might contain options and compare
functions which can change the behavior of the runsets. The
options functions behave as if they were inserted into the
runset between the end of the options and the beginning of the
assigns. The compare functions behave as if they were inserted
into the runset just before the compare() function, with the
exception of the compare() function itself, which is modified by
the contents of the configuration file.
In general, variables are not supported; the exception is the
few required arguments in compare functions, such as the
compare matrix, in which variables are ignored.
-s schematic -s command-line option Overrides the file name specified in the schematic_file
argument of the schematic() function. The
schematic_library_file argument of the schematic()
function is not overwritten.
When you use the -s command-line option, you must also use
the -sf command-line option.
If the schematic file is not found when LVS compare is run,
the IC Validator tool exits with a message stating that it cannot
open the file to read.
-sf SPICE | VERILOG | ICV -sf command-line option Specifies the input schematic netlist format, overriding the
format specified in the schematic_file argument of the
schematic() function. The schematic_library_file
argument of the schematic() function is not overwritten.
-stc cell Overrides the top cell in the schematic netlist that is specified
by the schematic_top_cell argument of the compare() function
and the cell argument of the schematic() function.
-svc violation_comment
-uvc violation_comment
-svn violation_name
-uvn violation_name
-sfn function
-ufn function
Selects or suppresses the violations that are executed during
the run. There are three categories of selection rules: select
or suppress by violation comment (-svc and -uvc), by violation
name (-svn and -uvn), and by function name (-sfn and -ufn).
The arguments can include the wildcard * and range
expressions using [ ].
-time_limit hours Supports exiting with partial results after a user-specified time
limit is reached (one-hour minimum).
Note:
This command-line option requires the Elite license.
-U name Undefines symbolic names that were previously defined with
the -D command-line option.
-usage Displays the usage message and exits.
-V Prints the program version.
-verbose Prints the output of the individual commands to the screen while
the IC Validator tool is running rather than only percentage
complete messages. The information printed is the same as
the information reported to the summary file. However, the
information is not displayed in the same order. The screen
output reports the information in the order of processing by the
IC Validator tool, while the content of the summary file is sorted
to match the sequential order of the runset.
-Version Prints version numbers for the individual components.
-vfn function_name | ALL Reports evaluated parameter values of the specified runset
function in the summary file, and on the screen when the
-verbose option is used. This command-line option takes only
one function name, but you can use multiple -vfn options on a
command line. You can also use the -vfn ALL option to report
the values for all runset functions.
The message contains the function name, actual value of each
parameter of the function, and the return type of the function.
This information is useful for debugging the input parameter
value of any function call. Here is an example of the output of
the -vfn library option:
<runset>:<line_no>: library (
library_name = CELL_LEVEL
format = MILKYWAY
cell = TOP
library_path = <path>
library_definition_file =
magnification_factor = 1.
) returning void
-vue Creates VUE output.
-vueshort [integer|ALL] Extracts either a specified number of shorts or all shorts. The
default is 200.
The extracted data is in the ./run_details/short_out_db directory.
When there is a set limit, the shorts are reported in the following
order, with the lowest level within the hierarchy reported first:
1. Shorts between power and ground
2. Shorts between power and power, or between ground and
ground
3. Shorts between power and signal, or between ground and
signal
4. Shorts between signal and signal
You can also set the shorts to be extracted using
the following arguments of the text_net() function:
create_short_finder_nets and short_debugging =
{vue_short_finder=true, output_limit=integer}.
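For comparison, here is a hedged sketch of the runset-side controls named above (other text_net() arguments are omitted, and the values are illustrative, not defaults):

```
text_net(
    // other text_net() arguments as required by your flow
    short_debugging = {vue_short_finder = true, output_limit = 100}
);
```

This limits short extraction to 100 shorts from within the runset rather than from the command line.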
Compiler Directives
Center the die about the origin to maximize available coordinate space, as shown in
Figure 2. When the die is centered, the -64 option is not needed to prevent an overflow.
The IC Validator error database is enhanced with a variable-width coordinate format that
can accommodate either 32-bit or 64-bit integer coordinates. This applies to both the
error databases (PYDB) and the error classification databases (cPYDB), which may be
exported for DRC error classification.
The pydb_export utility is enhanced with the ability to convert error classification
databases (cPYDB) among the various coordinate formats to maintain compatibility as
necessary.
Note:
Current and future IC Validator tools retain support for reading error databases
with the older fixed 32-bit or 64-bit coordinate formats. However, IC Validator
tools prior to version L-2016.06 will not be able to read the new coordinate
format.
Run-Only Execution
Follow these steps for run-only execution.
1. Identify the sections of the runset that contain the desired runset checks by using
these directives:
#pragma runonly begin
#pragma runonly end
You can define multiple run-only sections within a runset.
2. Call the IC Validator tool with -ro, the run-only command-line option, to
a. Parse the runset.
b. Prune the runset functions that do not have data flow into run-only code sections.
c. Execute the pruned runset.
Note:
If the IC Validator tool does not find a run-only section within a runset, the
runset is not executed.
3. Alternatively, to stop execution after the runset is parsed and pruned for run-only
execution, use the -pcr command-line option along with -ro. For example,
% icv -ro -pcr outfile
4. Use the -nro command-line option when using pruned run-only output as the source
runset. If you do not use the -nro option, default pruning optimizations can eliminate
functions inside run-only code sections.
Note:
All #pragma run-only directives remain in the pruned run-only output.
The presence of #pragma run-only directives in the pruned run-only output
enables the icv -ro runonly.rs command line to be used for execution of
the output pruned runonly.rs runset.
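Putting the steps together, here is a minimal sketch of a run-only section (the check and layer names are illustrative):

```
#pragma runonly begin
ERR_M1 @= { @ "M1 must be rectangular";
    not_rectangles(M1);
}
#pragma runonly end
```

Running icv -ro runset.rs executes only the functions that feed this section. Adding -pcr pruned.rs instead writes the pruned runset, which can later be executed with icv -ro -nro pruned.rs.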
Environment Variables
The $ sign preceding a variable indicates the variable is an environment variable. For
example,
$ENV_ABC
The variable must be set in the environment; otherwise a parse error occurs. All
environment variables are considered strings. For example,
if( $ENV_ABC == "XYZ") {
x = abc(XYZ);
}
else {
x = abc(NOT_XYZ);
}
• Print a list containing the names and the values of each specified environment variable.
#pragma envvar report variable ... variable
Note:
If there is more than one #pragma envvar default statement that defines the
same variable, subsequent occurrences see the variable as already set in the
environment, as set by the first occurrence.
The preprocessor definition named by cpp_var is given the value of the environment
variable named by env_var, as if by a #define command. If the environment variable is
not set, no action is taken, such that cpp_var remains undefined or retains its previous
definition.
The preprocessor #pragma define_from_envvar takes two arguments:
• The first argument is the name of the preprocessor definition to redefine if the
environment variable is set.
• The second argument is the name of the environment variable to get the value from.
#include <icv.rh>
#define str(x) #x
#define qstr(x) str(x)
#if defined(FOO_DEF)
note("Before pragma, defined as: " + qstr(FOO_DEF));
#else
note("Not defined before pragma");
#endif
#pragma define_from_envvar FOO_DEF FOO_ENV
#if defined(FOO_DEF)
note("After pragma, defined as: " + qstr(FOO_DEF));
#if FOO_DEF == 2
note("... and satisfies test for == 2");
#elif FOO_DEF == 3
note("... and satisfies test for == 3");
#endif
#else
note("Not defined after pragma");
#endif
Notice the original value is 2 because of the preprocessor definition that was passed on
the command line. The define_from_envvar pragma redefines FOO_DEF to a value of 3.
If FOO_ENV is not set, FOO_DEF has a value of 2. If FOO_DEF is also not defined on the
command line, it remains undefined for the remainder of the preprocessing.
Use the #if definition(identifier, string, case_sensitivity) function
in conjunction with the define_from_envvar pragma to support the querying of the
$envvar_name environment variable to determine if it has a value of string.
In the user environment, set this:
setenv BEOL_STACK 9M_3Mx_2Fx_2Dx_1Gx_1Iz_LB
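For example, the environment setting above could be pulled into the preprocessor and queried like this. This is a sketch: the definition name BEOL_DEF is hypothetical, and the exact form of the case-sensitivity argument should be checked against the definition() description.

```
#pragma define_from_envvar BEOL_DEF BEOL_STACK
#if definition(BEOL_DEF, "9M_3Mx_2Fx_2Dx_1Gx_1Iz_LB", true)
note("Using the 9M BEOL stack");
#endif
```

If BEOL_STACK is not set in the environment, BEOL_DEF remains undefined and the note() call is skipped.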
Pruning Layers
The #pragma prune_assign compiler directive tells the IC Validator tool to read only the
required runset layers from input design and create the required runset layers. You can
use this compiler directive to enable or to disable pruning:
#pragma icv compiler [enable|disable] prune_assign
When this compiler directive is not used or is set to disable, and the -prune_assign
command-line option is not used, the IC Validator tool reads all of the runset layers from
the input design. In all cases, unnecessary runset layers are not reported in the summary file. The
setting of the prune_assign directive in the runset has priority over the -prune_assign
command-line option.
#pragma prioritize
not_rectangles(M5);
The #pragma prioritize directive increases the priority for both of the
not_rectangles() functions, as shown in the following example:
#pragma prioritize
ERR_M4 @= { @ "M4 non-rectangles";
not_rectangles(M4);
not_rectangles(M5);
}
Runset Insertion
Runset insertion allows you to customize the main runset by using secondary runsets
without modifying the main runset. If the main runset includes the necessary pragma
statements to indicate where code can be inserted, you can provide additional runset code
in a separate file.
The syntax in the main runset is #pragma icv insert <label>. This pragma can only be used
one time per <label>. All of these pragmas must be on their own line.
After the insertion pragma is specified, you can mark a block of code by setting #pragma
icv label <label> begin and #pragma icv label <label> end. These pragmas must come
after the corresponding insert pragma. They cannot be nested or overlap. Multiple blocks
of code using the same <label> are allowed and are inserted into the same destination in
the order they are defined.
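As a sketch, a main runset and a secondary file might look like this (the label name EXTRA_CHECKS and the check contents are hypothetical):

```
// In the main runset:
#pragma icv insert EXTRA_CHECKS

// In the secondary runset file:
#pragma icv label EXTRA_CHECKS begin
ERR_EXTRA @= { @ "customer-specific check";
    not_rectangles(M1);
}
#pragma icv label EXTRA_CHECKS end
```

All blocks labeled EXTRA_CHECKS are inserted at the insert pragma, in the order they are defined.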
Exit Codes
Table 3 lists the general exit codes.
Table 3 General Exit Codes
Inlined Functions
When you run the IC Validator tool with the -ndg command-line option,
• Any user function that invokes an inline runset function needs to be tagged with the
inline qualifier. See Table 5 for a list of inline runset functions. For example,
my_options : inline function ( void ) returning void
{
. . .
error_options(...);
gds_options(...);
. . .
}
• User functions that call runset functions with cell lists and call runset functions used to
construct cell lists also must be tagged with the inline qualifier. See Table 6 for a list
of functions. For example,
add_strings_to_list : inline function (string_list : in_out list of
string, new_strings : list of string) returning void
{
string_list = string_list.merge(new_strings);
}
If a user function needs to be tagged with the inline qualifier but it does not have the
qualifier, the runtime error is:
error: unable to extract cell list for no explode options
Table 5 is a list of the runset functions that are tagged with the inline qualifier.
Table 5 Runset Functions Tagged With the inline Qualifier
assign() assign_edge()
assign_openaccess() assign_openaccess_edge()
assign_openaccess_text() assign_text()
equiv_options() error_options()
exclude_milkyway_cell_types() exclude_milkyway_net_types()
exclude_milkyway_route_guide_layers() exclude_milkyway_route_types()
exclude_ndm_blockage_types() exclude_ndm_design_types()
exclude_ndm_net_types() exclude_ndm_shape_uses()
gds_options() get_netlist_connect_database()
get_top_cell() hierarchy_auto_options()
hierarchy_options() incremental_options()
init_device_matrix() instance_property_number()
layout_drawn_options() layout_grid_options()
layout_integrity_by_cell() layout_integrity_by_marker_layer()
layout_integrity_options() library()
library_create() library_import()
lvs_black_box_options() lvs_options()
milkyway_merge_library_options() milkyway_options()
ndm_options() ndm_merge_library_options()
net_options() oasis_options()
openaccess_options()
pattern_options() prototype_options()
read_layout_netlist() resolution_options()
run_options() text_options()
Table 6 lists the runset functions with cell lists and functions used to construct cell lists.
Table 6 Runset Functions With Cell Lists and Used to Construct Cell Lists
cell_extent() cell_extent_layer()
compare() copy_by_cells()
create_ports() flatten_by_cells()
text_origin()
Distributed Processing
You can run the IC Validator tool in a variety of distributed processing situations, including
using multiple CPUs on a single host and using multiple hosts.
Running the IC Validator tool with distributed processing provides the following features:
• Scalability increases with the number of CPUs used
• Flexibility for adding CPUs to an existing IC Validator job
• Simultaneous multithreading
For example, to use 8 CPUs on a single host, use the following command-line option:
icv -host_init 8 runset.rs
It is possible to specify which host to run on. In the preceding example, the host name
was omitted, so the local host was assumed. If you specify a different host name, the
IC Validator tool starts on that host. In the following example, 8 CPUs are used on hostA.
icv -host_init hostA:8 runset.rs
Because a remote host is specified, the IC Validator tool must make a connection to that
host. By default, the tool attempts to use the rsh command to make this connection. If a
different connection method is desired, use the -host_login command-line option (some
IT environments may disable rsh access or require a different connection method). In the
following example, 8 CPUs are used on hostA and ssh is used to connect to the host.
icv -host_init hostA:8 -host_login /bin/ssh runset.rs
You can list more than one host. In this case, the IC Validator tool uses all of the hosts
listed, with the number of CPUs specified. In the following example a total of 32 CPUs
will be used: 8 CPUs on the local host; 8 on hostA; and 16 CPUs on hostB (because the
-host_login command-line option is NOT specified, rsh connects to the hosts).
icv -host_init 8 hostA:8 hostB:16 runset.rs
You can add hosts to a running IC Validator job by using the -host_add command-line
option. If a run_details directory was specified in the initial run using -rd, this directory
must also be specified on this run using -rd. The following example adds 16 CPUs on
hostC to the running job.
icv -host_add hostC:16
Note:
When using -host_add, most of the other command-line options are
unnecessary and not allowed. See -host_add in Table 2 for more information.
You can also remove hosts from a running job. Hosts are specified using the
-host_remove command-line option. It is only possible to remove entire hosts. The
following example removes all of hostC (all of its CPUs) from the running job.
icv -host_remove hostC
You can instruct IC Validator to automatically add and remove hosts from the running job.
Hosts are added when too many commands are queued and are removed when the hosts
are idle. This can be specified when the job is originally started or after the job is running.
You must provide the command that IC Validator should use to acquire a new host. The
command must include an icv -host_add on the acquired host.
The following example uses the Univa Grid Engine qsub command to acquire a host as
specified when IC Validator needs it.
icv -host_elastic "qsub -m abe -pe mt 4 -V -cwd -P bshort \
-b y icv -host_add SGE"
Similarly, to add 16 CPUs on one host to a running IC Validator job, use the -host_add
command-line option. For example,
bsub -n16 -R "span[hosts=1]" <bsub options> icv -host_add 16
As a convenience, the IC Validator tool can automatically determine the host configuration
and set -host_login to blaunch. To enable this feature, specify LSF as the
argument to -host_init. In the following example, 16 CPUs are requested on a single
host using bsub and are identified using the LSF argument of the -host_init command-
line option.
bsub -n 16 -R "span[hosts=1]" <bsub options> icv -host_init LSF
runset.rs
This feature is most useful when requesting jobs on multiple hosts. The following example
works with multiple hosts as well (2 hosts, 16 CPUs each):
bsub -n 32 -R "span[hosts=2]" <bsub options> icv -host_init LSF
runset.rs
The LSF argument also works with the -host_add command-line option. In this example, a
total of 64 CPUs are added to a running job: 8 hosts each with 8 CPUs.
bsub -n 64 -R "span[hosts=8]" <bsub options> icv -host_add LSF
You can automatically add and remove hosts by using the LSF keyword for
-host_elastic:
icv -host_elastic LSF
The arguments in <> are set automatically by the tool and are not available to the user.
If this default setting does not work, use it as a starting point for creating a usable LSF
command. For example, if your environment requires a project and span directive, you can
use:
icv -host_elastic "bsub -p bnormal -R 'span[hosts=1]' \
-o elastic.out.txt -e elastic.err.txt -n 8 -J icv.elastic \
env icv -host_add LSF"
Similarly, to add 16 CPUs on one host to a running IC Validator job, use the -host_add
command-line option. For example,
qsub -pe mt 16 <qsub options> icv -host_add 16
As a convenience, the IC Validator tool can automatically determine the host configuration
and set -host_login to qrsh -inherit. To enable this feature, specify SGE as the
argument to -host_init. In the following example, 16 CPUs are requested on a single
host using qsub and are identified using the SGE argument of the -host_init command-
line option.
qsub -pe mt 16 <qsub options> icv -host_init SGE runset.rs
This feature is most useful when requesting jobs on multiple hosts. The following example
works with multiple hosts as well (2 hosts, 16 CPUs each):
qsub -pe dp16 32 <qsub options> icv -host_init SGE runset.rs
The SGE argument also works with the -host_add command-line option. In this example, a
total of 64 CPUs are added to a running job: 8 hosts each with 8 CPUs.
qsub -pe dp8 64 <qsub options> icv -host_add SGE
You can automatically add and remove hosts by using the SGE keyword for
-host_elastic:
icv -host_elastic SGE
The arguments in <> are set automatically by the tool and are not available to the user.
If this default setting does not work, use it as a starting point for creating a usable SGE
command. For example, if your environment requires a different project and you want 16
CPUs, you can use:
icv -host_elastic "qsub -S /bin/sh -o elastic.out.txt \
-e elastic.err.txt -pe mt 16 -V -cwd -P blarge -b y icv -host_add SGE"
License Requirements
For Elite licensing, see Table 7 for examples of the number of licenses needed.
Table 7 Elite Licensing for Multicore Distributed Processing
Metal fill on 9 cores: 3 licenses. The first two licenses enable 4 cores each, plus 1
core from the third license.
For more information about IC Validator licensing, see the Licensing and Resource
Requirements chapter in the IC Validator User Guide.
Runset Caching
The IC Validator runset caching feature reduces the startup time of the IC Validator tool.
After a runset is parsed and optimized, the internal data structures for the runset are saved
to a runset cache file. Subsequent IC Validator runs that use the same runset load the
runset cache file rather than parsing and optimizing the runset again.
You can turn runset caching off by using the -norscache command-line option. You might
want to use this option when debugging a run.
Note:
If you use runset caching with parasitic extraction (PEX) functions for multiple
corners, the IC Validator tool does not flag any file naming conflicts until
runtime.
See Using IC Validator Command-Line Options With Runset Caching on page 63 for
information about using command-line options with runset caching.
Cache Files
When the IC Validator tool writes a runset cache file, the following messages are written to
the screen indicating the path of the cache file:
Saved runset to cache: /remote/us54h1/myname/.icvrscache/
e3b32d5de337d888106672bbba370041-rod-3908.rscache
Parsing Finished
Runset Compile Time=0:00:03 User=3.37 Sys=0.16 Mem=0.182 GB
When the IC Validator tool reads a runset cache file, the following messages are written to
the screen indicating the path of the cache file that was read:
Loaded runset from cache:
/remote/us54h1/myname/.icvrscache/
e3b32d5de337d888106672bbba370041-rod-3908.rscache
Cached Runset Compile Time=0:00:00 User=0.25 Sys=0.04 Mem=0.108 GB
You can determine which cache file in your cache directory is associated with a specific
runset. When you look at the content of the cache file, you'll see something like this text:
Cached runset file created from ICV runset:
/remote/seg-scratch/myname/S475264/pxl.ev
. . .
The second line is the full path of the runset that was used to generate the cache file.
This cache directory is not user-specific. Multiple users can point to the same
shared cache directory. When enabled, the IC Validator tool performs the
following checks:
• Checks the shared cache directory for cache that either matches or misses
the runset criteria
• If missed, the IC Validator tool checks the local cache directory
◦ If matched, the IC Validator tool uses the local cache file.
◦ If missed, the IC Validator tool compiles and stores cache in the local
cache directory (local to the user).
You must move cache files to the shared directory manually so that they can be
used by multiple users.
ICV_RUNSET_CACHE_DIR
Sets the runset cache directory used by the IC Validator tool when storing cache
files.
% setenv ICV_RUNSET_CACHE_DIR directory
When the IC Validator tool is run, it does not delete any preexisting
runset cache files until either the ICV_RUNSET_CACHE_MAXFILES or the
ICV_RUNSET_CACHE_MAXSIZE limit is reached. When either limit is reached, the
IC Validator tool deletes files in the least-recently used order; that is, cache files
that have been loaded recently are retained while the older files are deleted.
ICV_RUNSET_CACHE_MAXSIZE
Limits the amount of disk space the IC Validator tool can use to store runset
cache files. By default, the IC Validator tool limits the space used for runset
cache files to 200 MB.
% setenv ICV_RUNSET_CACHE_MAXSIZE max_size
For example,
...
RUNSET-CACHE: cache directory : ./RSCACHE/
RUNSET-CACHE: cache max size : 307200
RUNSET-CACHE: cache max files : -1
RUNSET-CACHE: version string : <platform> <ICV version> <date>
RUNSET-CACHE: opendir() opened cachedir : ./RSCACHE/
RUNSET-CACHE: readdir() read filename :
3cb51a5a7c3d1da800be5d6ee862da29-rod-16372.rscache
RUNSET-CACHE: 0 cache files matched runset MD5:
1de3cbc1c76cd3f7b05877718fa83cae
RUNSET-CACHE: no matching cache file found!
ICV_DISABLE_RUNSET_CACHE
Disables runset caching.
% setenv ICV_DISABLE_RUNSET_CACHE
Layer Mapping
There are two types of layer mapping:
• Runset Layer Mapping: This mapping modifies your runset to create a new runset.
• Data Layer Mapping: This mapping modifies the data as it is read in by changing the
layer and datatype attributes of the data.
You can use both types of mapping when you are running on multiple libraries.
The first line of the mapping file must always be
RUNSET(*,*) = Library(*,*)
This statement maps each layer to itself (identity mapping). This identity mapping ensures
that there is a mapping for each layer. Subsequent lines of the mapping file override the
identity map for selected layers.
For example,
Runset(*,*) = Library(*,*)
Runset(15,0) = Library(1,0)
Runset(16,0) = Library(2,0)
Runset(17,0) = Library(3,0)
• Runset(A,B) = Library(C1...Cm,D1...Dn)
In the following example, any assign reference to layer/datatype (5,0) is replaced with
the four layer/datatype pairs (10,0), (10,4), (12,0), and (12,4).
Runset(5,0) = Library(10 12,0 4)
• Runset(A1...Am,B1...Bn) = Library(C1...Cm,D1...Dn)
Note:
The number of Runset layer elements and Library layer elements must be
equal.
The number of Runset datatype elements and Library datatype elements
must be equal.
In the following example, four layer/datatype substitutions are made:
assign (2,6) is replaced with (10,0)
assign (2,8) is replaced with (10,4)
assign (4,6) is replaced with (12,0)
assign (4,8) is replaced with (12,4)
Runset(2 4,6 8) = Library(10 12,0 4)
For example, Runset(1,*) = Library(2,*) means that assign layer 1, all datatypes, is replaced with layer 2, all datatypes.
• Layer/datatype ranges are specified by a dash. For example,
Runset(2 5-8,*) = Library(10-12 17 20,*)
In this example, the main and secondary libraries are specified in the library() function:
• The main library, one.oasis, is specified with the library() function. It has metal1 on
layer 41.
• A secondary library, two.oasis, has metal1 on layer 31.
• The runset assumes that metal1 is on layer 5.
The metal1 data on layer 31 in the two.oasis library is mapped to layer 41 as it is read in.
This mapping ensures that all input metal1 data is read onto layer 41. The runset layer
mapping then transforms the assign({{5}}) assignment into assign({{41}}) to ensure
that the run executes as expected. Here is a snippet of the runset:
library(library_name = "one.oasis",
cell = "A",
format = OASIS,
merge_libraries = {{library_name = "two.oasis", format=OASIS,
layer_map_file="two_layer_map"}});
metal1 = assign({{5}});
The two_layer_map file has the data layer mapping for two.oasis:
41 31
The runset layer mapping file, specified with the -lf command-line option, has this
mapping:
RUNSET(*,*) = LIBRARY(*,*)
RUNSET(5,*) = LIBRARY(41,*)
In the following example, the main library is specified in the library() function and the
secondary library in the library_import() function. Use this method to import multiple
libraries when you do not want cell merging to occur. Each library remains distinct from the
others so that your assigns functions can import data from one library but not another.
• The main library, one.oasis, is specified with the library() function. It has metal1 on
layer 41.
• A secondary library, two.gds, has metal1 on layer 31.
• The runset assumes that metal1 is on layer 5.
The two_layer_map file and the runset layer mapping are the same as in the previous
example. Here is a snippet of the runset:
lib1 = library(library_name = "one.oasis",
    cell = "A",
    format = OASIS);
lib2 = library_import(library_name = "two.gds",
    cell = "A",
    cell_prefix = "TWO_",
    format = GDSII,
    layer_map_file = "two_layer_map");
metal1 = assign({{5}});
In the following example, the main library is specified in the library() function and the
secondary library in the milkyway_merge_library_options() function. The libraries are
neither merged nor kept separate; instead, Milkyway cell data is replaced with the
equivalent GDSII data on a cell-by-cell basis. See the
milkyway_merge_library_options() function for more information.
• The main Milkyway library, one, is specified with the library() function. It has metal1
on layer 41.
• A secondary GDSII library, two.gds, has metal1 on layer 31.
• The runset assumes that metal1 is on layer 5.
The two_layer_map file and the runset layer mapping are the same as in the first example.
Here is a snippet of the runset:
library(library_name = "one",
cell = "A",
format = MILKYWAY,
library_path = "./");
milkyway_merge_library_options(
libraries = {{"one", "./", {{"two.gds", GDSII,
layer_map_file="two_layer_map"}}}},
missing_cell = ABORT);
metal1 = assign({{5}});
Partial constructs cannot be encrypted. For example, this partial construct is not allowed:
...
external2( l1, l2, distance < 2.0, extension = NONE,
#pragma PXL encrypt begin
look_thru = COINCIDENT);}
#pragma PXL encrypt end
...
The pxlcrypt utility automatically breaks encryption blocks around included files. For
example, if the runset has this code:
#pragma PXL encrypt begin
...
#include "file1.rh"
...
#pragma PXL encrypt end
2
Licensing and Resource Requirements
This chapter contains the setup instructions for the IC Validator tool. Make sure that
you have all the correct directories and files, that your environment accommodates the
IC Validator tool, and that you have incorporated the proper licensing information.
The licensing and resource requirements are described in the following sections:
• Licensing Schemes
• License Waiting
• Which IC Validator Licenses Do I Need?
• Resource Requirements
For additional information about Synopsys licensing and installing the IC Validator tool, see
the following documents:
The Synopsys Licensing QuickStart Guide, available at
http://www.synopsys.com/Support/LI/Licensing/Pages.
• Information about installing the IC Validator tool, available at
http://www.synopsys.com/Support/LI/Installation/Pages/default.aspx.
Licensing Schemes
The IC Validator tool has two types of licensing schemes. The IC Validator tool is run using
one of these schemes:
• Elite licensing
• Base licensing
Elite licensing is the default scheme. If the job does not use license waiting or the
-lic_elite or -lic_base command-line options, the IC Validator tool first checks for Elite
licenses. If Elite licenses are not available, the tool checks for Base licenses. If neither of
these licenses is available, the job ends with a license-denied error.
The two licensing schemes can exist on a system as available licensing schemes, but an
IC Validator job uses only one of them. They cannot be used together.
If two types of licenses are available, you can use the -lic_elite or -lic_base
command-line options to select the licensing scheme to use for a specific job. See
Command-Line Options on page 23 for more information.
License Waiting
The IC Validator tool supports license waiting in accordance with the standard Synopsys
Common Licensing (SCL) guidelines. To enable this function, you must set the
ICV_LICENSE_WAIT environment variable. No argument is required for this environment
variable.
When you run the IC Validator tool with license waiting:
• If only Elite licensing is installed, the job queues for Elite licenses.
• If only Base licensing is installed, the job queues for Base licenses.
• If multiple licensing schemes are installed, the job queues for the first of Elite or Base
that is installed. Other schemes are ignored.
◦ If you use the -lic_elite command-line option, the job queues for Elite licenses
and the Base licenses are ignored.
◦ If you use the -lic_base command-line option, the job queues for Base licenses
and the Elite licenses are ignored.
Note:
There is no timeout when you use the ICV_LICENSE_WAIT environment
variable. The tool waits until a license becomes available.
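The queueing behavior described above can be modeled with a small sketch. The scheme names and option strings mirror this section; this is an illustration, not a real Synopsys API.

```python
# Model of which license pool an IC Validator job queues for under
# license waiting. (Hypothetical helper for illustration only.)

def queue_target(installed, option=None):
    """installed: subset of {"elite", "base"}; option: "-lic_elite",
    "-lic_base", or None. Returns the scheme the job queues for."""
    if option == "-lic_elite":
        return "elite"       # Base licenses are ignored
    if option == "-lic_base":
        return "base"        # Elite licenses are ignored
    # With no option, the job queues for the first of Elite or Base
    # that is installed; other schemes are ignored.
    for scheme in ("elite", "base"):
        if scheme in installed:
            return scheme
    return None

print(queue_target({"elite", "base"}))               # elite
print(queue_target({"base"}))                        # base
print(queue_target({"elite", "base"}, "-lic_base"))  # base
```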
Elite Licensing
Elite licensing uses the licenses as shown in Table 8. ICValidator-Manager is the main
license. It is checked out every time the IC Validator tool is run.
Table 8 Elite Licenses
PERC: ICValidator2-GeometryEngine
Metal fill on 9 CPUs: 3 licenses (the first two licenses enable 4 CPUs each,
plus 1 CPU from the third license)
Base Licensing
Base licensing uses the licenses as shown in Table 10. ICValidator-Manager-2020 is the
main license. It is checked out every time the IC Validator tool is run.
1. Only checks for the presence of a valid license. No license is checked out.
2. Only checks for the presence of a valid license. No license is checked out.
PERC ICValidator2-GeometryEngine
Resource Requirements
The initial database file and the files that the IC Validator tool creates and uses determine
the system requirements. With some knowledge of the design to be checked, you can
estimate physical memory, disk space, and possible swap space requirements for your
IC Validator runs.
A contrib directory, which is included with IC Validator releases, provides scripts for
analyzing IC Validator performance. The contrib directory is at the top level of the release
install directory. These scripts are not supported. See the README file in the contrib
directory for more information.
Space Requirements
The IC Validator program performs layout checking operations on group files that are
created from the design database and stored in the group directory. The IC Validator tool
first creates the primary group files, one file for each layer in an assign function of the
runset. Each primary group file is a database describing that layer’s existence throughout
the design, maintaining all hierarchy. As the IC Validator run progresses, output files are
generated from certain checks. These outputs are also written as group files. For example,
the files in a group directory could be:
HRCHY.ERROR inst_indx.dat pgate.group toxcont.group
cont.group met1.group poly.group via.group
gate.group met2.group psel.group well.group
inst_data.dat ngate.group tox.group
A general rule is to have a minimum of two to three times the space of the initial database,
if it is in GDSII file format, reserved for group file creation, assuming the design has a
good hierarchical style. A mostly flat design, or one with unwanted data interactions, might
require four to six times the initial database space for its group file creation. Thus, for a
GDSII file database of 10 MB, a minimum of 20 to 30 MB is required for group file creation
space, if the design is hierarchical in nature. By contrast, at least 40 to 60 MB are required
for a flat design.
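As a quick sanity check of this sizing rule, a minimal sketch (sizes in MB; the multipliers are the ones stated above, and the function name is invented for the example):

```python
# Reserve 2-3x the GDSII database size for group files when the design
# is hierarchical, 4-6x when it is mostly flat.

def group_space_estimate_mb(gdsii_mb, hierarchical=True):
    low, high = (2, 3) if hierarchical else (4, 6)
    return gdsii_mb * low, gdsii_mb * high

print(group_space_estimate_mb(10))         # (20, 30) for a hierarchical design
print(group_space_estimate_mb(10, False))  # (40, 60) for a flat design
```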
In addition to the group files, the other significant output files from the IC Validator run
are the run summary files and the output database files. The summary files are the
cell.LAYOUT_ERRORS and cell.sum files. These files are ASCII log files that document
the results of the IC Validator run on the cell named in the runset.
The size of the cell.LAYOUT_ERRORS file and the output database depends on the
number of errors encountered in the IC Validator run. As with the group files, if the design
is hierarchical, the graphical output database is smaller than the graphical output for a
more flat design. The graphical output database is smaller because the IC Validator tool
needs to report errors internal to repeated modules only one time, while for a flat design
every occurrence of each error must be reported.
Memory Usage
You can manage the memory space needed for error output reporting by using a logical
approach to locating errors. Start by running the IC Validator tool on small modules of
the design and address errors as they are encountered. After the errors are fixed, run
the IC Validator tool on hierarchically higher modules. Using this procedure keeps your
cell.LAYOUT_ERRORS file and output database reasonably small. Working up through
the hierarchy in this way manages the use of memory needed for error output storage
space.
The IC Validator tool also requires physical memory and swap space to complete its
checks. The tool attempts to load into memory all data required to complete the checks
currently being performed. Any overflow of physical memory uses swap space. Generally
speaking, any time the IC Validator program needs to access swap space, you can expect
performance degradation, especially if swap space is distributed over a large network with
considerable network traffic. If enough physical memory is available to load all data and
complete a check without accessing swap space, you can expect maximum throughput.
A machine with three to four times as much memory as the GDSII file size should perform
well with a hierarchical design. Flat designs require more hardware. Set the swap space
so that any overflow of physical memory does not exhaust the swap space and cause the
IC Validator run to terminate.
The output cell.sum file includes a list of the memory used and the number of page faults
required to complete each check. If you see that certain checks require more memory than
the machine has allocated, causing large amounts of data to be swapped out to disk, you
might need a machine with more memory for future runs. Example 1 shows an example
of the reported memory usage. The following section, Runtime and Memory Use Report,
gives information about the reported runtimes and memory usage.
Inputs: gate_e.edgelayer.0001
Outputs: gate_gt_3_5.edgelayer.0001
8 unique edge sets written.
Check time = 0:00:00 User=0.00 Sys=0.00 Mem=0.015 GB
external2() at sf.rs:166
external2(gate_le_1_5, POLY, distance < 0.15, extension = RADIAL,
look_thru = COINCIDENT)
Comment: "R.POLY.G1: Min Space POLY to Gate for Gate Width <=1.5 must be > 0.15"
Function: external2
Inputs: gate_le_1_5.edgelayer.0001, POLY.polygonlayer.0002
WARNING - 6 spacing violations found.
Check time = 0:00:00 User=0.00 Sys=0.00 Mem=0.016 GB
external2() at sf.rs:169
external2(gate_gt_1_5_le_2_5, POLY, distance < 0.25, extension = RADIAL,
look_thru = COINCIDENT)
Comment: "R.POLY.G2: Min Space POLY to Gate for 1.5 < Gate Width <=2.5 must be > 0.25"
Function: external2
Inputs: gate_gt_1_5_le_2_5.edgelayer.0001, POLY.polygonlayer.0002
WARNING - 6 spacing violations found.
Check time = 0:00:00 User=0.00 Sys=0.00 Mem=0.016 GB
external2() at sf.rs:172
external2(gate_gt_2_5_le_3_5, POLY, distance < 0.35, extension = RADIAL,
look_thru = COINCIDENT)
Comment: "R.POLY.G3: Min Space POLY to Gate for 2.5 < Gate Width <=3.5 must be > 0.35"
Function: external2
Inputs: gate_gt_2_5_le_3_5.edgelayer.0001, POLY.polygonlayer.0002
WARNING - 3 spacing violations found.
Check time = 0:00:00 User=0.00 Sys=0.00 Mem=0.016 GB
external2() at sf.rs:175
external2(gate_gt_3_5, POLY, distance < 0.45, extension = RADIAL,
look_thru = COINCIDENT)
Comment: "R.POLY.G4: Min Space POLY to Gate for Gate Width > 3.5 must be > 0.45"
Function: external2
Inputs: gate_gt_3_5.edgelayer.0001, POLY.polygonlayer.0002
WARNING - 6 spacing violations found.
Check time = 0:00:00 User=0.00 Sys=0.00 Mem=0.016 GB
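Because the per-check lines follow a fixed format, the peak per-check memory can be pulled from a cell.sum file with a short script. A sketch, assuming the "Check time" line format shown in the example above (the helper name is invented):

```python
# Scan "Check time" lines from a cell.sum summary and report the peak
# per-check memory in GB.
import re

LINE = re.compile(r"Check time = (\S+) User=(\S+) Sys=(\S+) Mem=([\d.]+) GB")

def peak_check_memory(summary_text):
    mems = [float(m.group(4)) for m in LINE.finditer(summary_text)]
    return max(mems) if mems else 0.0

sample = """\
Check time = 0:00:00 User=0.00 Sys=0.00 Mem=0.015 GB
Check time = 0:00:00 User=0.00 Sys=0.00 Mem=0.016 GB
"""
print(peak_check_memory(sample))  # 0.016
```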
the local machine results in faster completion. Enough disk space local to the machine on
which the job is run should be available to accommodate the Milkyway library (or GDSII
or OASIS files), the group files being created, and all ASCII or physical error files being
created.
summary file for each stage of an IC Validator run. Note that some processing falls outside
of the cumulative memory reporting.
Note:
Generally, only memory allocated by the IC Validator tool is reported in the
peak usage or accumulative memory usage. For example, Milkyway memory
is allocated by the Milkyway library, so it is not included in the report of IC
Validator peak memory usage or machine accumulative memory usage.
Table 12 shows the definitions of the times and memory usage terms used in the summary
file.
Table 12 Terms Used in the Summary File
Term Definition
• When the IC Validator tool is run with the -rscache option for runset caching, the
report looks like this:
Cached Runset Compile Time=0:00:04 User=0.81 Sys=0.25 Mem=0.203 GB
3. Preprocessing stages
• Hierarchy reading and optimization stage: The Milkyway and OASIS memory
usage is not accounted for in the peak memory report.
Oasis memory used: 27.0M
Milkyway memory usage: 4714.789M
Reading hierarchy Time=0:00:10 User=8.29 Sys=2.87 Mem=0.207 GB
Updating hierarchy Time=0:00:00 User=0.07 Sys=0.00 Mem=0.056 GB
Exploding Time=0:00:00 User=0.44 Sys=0.02 Mem=0.057 GB
• Regioning stage
Determine Region Time=0:00:00 User=0.02 Sys=0.00 Mem=0.059 GB
4. Command execution
• Command statistic report
Check Time=0:00:00 User=0.00 Sys=0.00 Mem=0.017 GB
• text_net()
Total Text Time=0:00:00 User=0.05 Sys=0.00 Mem=0.064 GB
• extract_devices()
Device Connect Time=0:00:04 User=1.41 Sys=2.84 Mem=0.147 GB
• netlist()
Netlisting Time=0:00:03 User=1.63 Sys=0.68 Mem=0.069 GB
• write_milkyway(): The Milkyway memory usage is not accounted for in the peak
memory report.
Milkyway memory usage: 4129.207M
Compression Time=0:05:07 User=275.34 Sys=27.80 Mem=3.529 GB
Writing Time=0:10:32 User=516.55 Sys=47.36 Mem=3.812 GB
• write_oasis(): The OASIS memory usage is not accounted for in the peak memory
report.
• write_gds():
Writing Time=0:00:00 User=0.00 Sys=0.00 Mem=0.025 GB
5. Host summary
• Per-host summary of total runtime and peak memory usage. This summary excludes
memory usage for reading and writing Milkyway data and OASIS files.
Host number 0 total check Time=0:04:40 User=182.60 Sys=37.42
Mem=0.791 GB
• Per-host summary of total runtime, including preprocessing, and peak memory
usage. This summary excludes memory usage for reading and writing Milkyway data
and OASIS files.
Host number 0 total check and preprocess Time=0:04:56 User=195.38
Sys=40.62 Mem=0.791 GB
• Overall runtime and peak memory report that is for all stages and functions.
This report excludes memory usage for reading and writing Milkyway data and
OASIS files.
Overall engine Time=0:06:12 Highest command Mem=0.791 GB
• Master memory usage. This usage is not accounted for in the peak memory
report.
Overall Master Mem=0.246 GB
Operating Systems
The IC Validator tool supports many hardware architectures and operating systems. See
the installation guide for a list of the supported operating systems:
http://www.synopsys.com/Support/Licensing/Installation/Pages/default.aspx
System Limits
If system limits are set incorrectly, you might encounter an error during the IC Validator
run. For example, the following commands set the proper system limits for an IC Validator
run on a Linux operating system. (The numerical values are in kilobytes.)
unlimit filesize
unlimit datasize
limit descriptors 1024
limit stacksize 8192
Note:
The descriptors setting controls how many files you can have open at any time
during a run.
To verify the limits on a host, run the limit shell command.
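The same limits can also be inspected from Python with the standard resource module, which may be convenient in run scripts. The recommended values mirror the settings above; the RLIMIT names are standard POSIX constants, not IC Validator specific, and this check is only a sketch.

```python
# Verify that process limits meet the recommended values for an
# IC Validator run (descriptors 1024, stacksize 8192 KB).
import resource

CHECKS = {
    "descriptors": (resource.RLIMIT_NOFILE, 1024),        # open files
    "stacksize":   (resource.RLIMIT_STACK, 8192 * 1024),  # bytes (8192 KB)
}

for name, (rlim, want) in CHECKS.items():
    soft, hard = resource.getrlimit(rlim)
    ok = soft == resource.RLIM_INFINITY or soft >= want
    print(f"{name}: soft={soft} ({'ok' if ok else 'below recommended'})")
```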
Networked Environments
Accessing files over a networked environment, such as when a remote group directory
is used, can degrade performance. You might observe large differences in the wall time
(overall time) and total CPU time (user time plus system time) reported by the IC Validator
tool.
Note:
These differences could also be caused by:
- Other processes running on the same system.
- System overhead.
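The gap between wall time and total CPU time can be computed directly from the reported fields. A sketch using the host-summary numbers from the memory report example earlier in this chapter (the helper names are invented):

```python
# Compare reported wall time against total CPU time (user + sys).
# A large gap can indicate remote-filesystem or system overhead.

def to_seconds(hms):          # "0:04:40" -> 280
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

def io_gap(wall_hms, user, sys):
    wall = to_seconds(wall_hms)
    return wall - (user + sys)   # seconds not spent on the CPU

# Host number 0: Time=0:04:40 User=182.60 Sys=37.42
print(io_gap("0:04:40", 182.60, 37.42))  # roughly 60 s of non-CPU time
```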
The read- and write-transfer sizes (rsize and wsize) of the NFS mounts should be at
3
Output Files
This chapter provides an overview of IC Validator design rule checking (DRC) and layout-
versus-schematic (LVS) output files.
Results File
The IC Validator tool creates a RESULTS file, cell.RESULTS, that provides the results from
key areas of the run and a summary of key results that you can search or grep. The RESULTS
file also contains the IC Validator command-line options used for the run. This file is the
starting point for analyzing the results of an IC Validator run.
exception of functions without a rule. These functions are reported (indented by two
spaces) under the default rule name, “Violation.” Violation counts are preceded with
the “v =” string for easy searching of a specific violation count.
• Results Summary: LVS Run
This section consists of two parts:
◦ The first part is the table of LVS compare statistics. This section reports the
following LVS compare statistics: LVS compare results, top equivalence points,
number of checked black-box cells, number of passed black-box cells, number
of failed black-box cells, number of checked equivalences, number of successful
equivalences, and number of failed equivalences. See the cell.LVS_ERRORS file
for the detailed compare violation data.
◦ The second part lists the LVS results of all equivalence and black-box pairs; the first
being the failed equivalences and black-box cells, followed by passed equivalences
and black-box cells. You can search or grep the list to quickly find the results of
equivalence and black-box cells.
• Assign Layer Statistics
This section consists of two parts:
◦ The first part reports four items: total number of used assign layers, number of
empty used assign layers, number of non-empty used assign layers, and number of
pruned assign layers.
◦ The second part consists of a table showing each layer name, hierarchical
and flat polygon counts for the layer (or “Pruned” if the layer was pruned), and
the layer or datatype assignments for the layer. This part is output only if the
verbose_results_file argument of the run_options() function contains the
ASSIGNS option.
The table is divided into three sections: 1) non-empty used layers, 2) empty used
layers, and 3) pruned layers. The assign statement for each layer contains layer
datatype assignments and all other user-supplied assignment options.
• Run Configuration
This section consists of six parts:
◦ The run_options() function reports the verbose_results_file argument.
◦ The hierarchy_options() function reports the delete argument.
◦ The error_options() function reports the following items:
error_limit_per_check, match_errors, report_empty_violations, and
clean_classifications arguments.
Error File
The IC Validator tool creates an error file, cell.LAYOUT_ERRORS, during DRC or
netlist extraction runs. These types of errors are referred to as layout errors. The
LAYOUT_ERRORS file contains the results of a run, a summary list of the errors, and a
more detailed list of the errors.
The top of the file states either CLEAN or ERRORS so that you can quickly determine if
there are layout errors in the design.
The next section is the summary section. This section provides the comment for a
function, the function itself, and the number of errors for that function if it had errors.
The last section is the details section. Each error report contains the following: an
error description that identifies the check being performed; a structure name where the
error occurred; a position or a bounding box value where the error occurred; and other
information specific to a check. The position coordinates are reported hierarchically, that
is, relative to the named structure and only one time for each referenced structure. These
errors can also be viewed graphically using the VUE debugging tool.
During the run, the LAYOUT_ERRORS file is kept up-to-date with the results and the
summary sections. At the end of the run, a final version of the file is written including the
details section, as specified by error_options(report_error_details).
The following section of the LAYOUT_ERRORS file shows a minimum area rule for metal1
(Met1). The rule was coded as follows in the runset:
//R.Met1.A
{
@"R.Met1.A:For Metal 1, minimum area must be 0.04";
area( Met1, value < 0.04);
};
=========================================================================
ERROR DETAILS
-------------------------------------------------------------------------
R.Met1.A: Minimum area must be 0.04
-------------------------------------------------------------------------
sf.rs:238:area
- - - - - - - - - - - - - - - - - - - - - - - - - - - -
Structure ( lower left x, y ) ( upper right x, y) Area
- - - - - - - - - - - - - - - - - - - - - - - - - - - -
top (37.2700, 4.0300) (37.4800, 4.1700) 0.0294
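For the rectangular shape in this report, the violation can be re-derived from the printed bounding box: the lower-left and upper-right coordinates determine the area. A quick sketch (an illustrative helper, not a tool API):

```python
# Re-check the reported minimum-area violation from the bounding box
# printed in the detail row. For a rectangle, bbox area equals shape area.

def bbox_area(llx, lly, urx, ury):
    return round((urx - llx) * (ury - lly), 4)

area = bbox_area(37.27, 4.03, 37.48, 4.17)
print(area, area < 0.04)  # 0.0294 True -- violates the 0.04 minimum
```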
Summary File
The IC Validator tool creates a summary file, cell.sum, during DRC or netlist extraction
runs.
If you are using the -verbose command-line option, the information printed to the screen
is the same as the information reported to the summary file. The screen output, however,
prints the information in the order of processing by the IC Validator tool, while the content
of the summary file is sorted to match the sequential order of the runset.
Rules File
Rules to be executed are written to the cell.rules file in the run_details directory. Violation
names and violation comments that remain in the runset after all forms of runset selection
have been performed are reported in the cell.rules file. Examples of runset selection
include preprocessor directives, selectable rules, incremental layers, and run-only pruning.
Violation names and violation comments are harvested for all rules in the final
parse tree. Because runtime conditional commands (if/then/else statements and
for/foreach/while loops) might remain, rules in the final parse tree are not guaranteed
to execute. Rule output is included for all commands in the final parse tree, including
commands that do not execute due to runtime conditions.
Violation names and violation comments might not always be statically resolved. In this
case the file and line number are provided in the output file instead of the rule name.
Only the violation comment is present in the output if there is no “@=” violation block for
the runset rule.
The default comment is "Violation"; "@ \"Violation\"" appears in the rules to be output
when a rule uses the default comment.
The following is an example cell.rules file:
ICV_Master (R)
Copyright (c) 1996 - 2015 Synopsys, Inc.
This software and the associated documentation are proprietary to
Synopsys, Inc. This software may only be used in accordance with the terms
and conditions of a written license agreement with Synopsys, Inc. All
other use, reproduction, or distribution of this software is strictly
prohibited.
Technology File
The IC Validator tool creates a technology file, cell.tech, that displays the options that
are set for the IC Validator run. The list of options includes the default as well as the user-
specified values. The technology file contains three sections: exploded cells, deleted cells,
and placement explode statistics.
Pair Section
The Pair section reports the cell pairs that were created during each virtual cell pass
iteration. The iterate_max option of the pairs argument of the hierarchy_options()
function sets the maximum number of iterations.
Set Section
The Set section reports the sets that were created during each virtual cell pass iteration.
The iterate_max option of the sets argument of the hierarchy_options() function sets
the maximum number of iterations.
Explode Section
The Explode section of the output file displays all of the branches that were exploded
to bring together interacting placements under a common parent. For more information,
see the explode argument of the hierarchy_options() function in the IC Validator
Reference Manual.
The error database contains all the error data generated by violation blocks and all
the information needed for a VUE debugging session. By default, this database is in
run_details/pydb. You can change the location by using the db_path argument of the
error_options() function. This database allows for data mining, and you can create
custom reports.
Note:
You can start reviewing the error database before the IC Validator tool
completes the run.
The IC Validator tool generates a hierarchy of error structures that mimics the database
hierarchy that is being checked. Thus, any errors that are found in a particular structure
in the database appear in a corresponding error structure in the error hierarchy. Error
structures are generated only for the database structures that contain errors or that
reference other structures with errors. The IC Validator tool does not create an error
hierarchy if no errors were found.
Error structures are named with the default error prefix, ERR_. This prefix precedes
the structure name. For example, the default error structure for the cell CLOCK is
ERR_CLOCK. You can redefine the error prefix using the error_cell_prefix argument
in the write functions, such as the write_gds() function.
For example,
CHIP { ERR_CHIP {
ALU { ERR_ALU {
BIT0 ERR_BIT0
BIT1 } ERR_BIT1 }
CLOCK { ERR_CLOCK {
CLK1 ERR_CLK1
CLK2 ERR_CLK3 }
CLK3 } }
}
In the error hierarchy, there is no occurrence of the error structure ERR_CLK2. Either
the original CLK2 cells were exploded during preprocessing, or the structure CLK2 did
not contain any errors or structures that contain errors. If a previous IC Validator run
with errors is corrected and rerun, the previous error hierarchy structures are retained. The
IC Validator tool does not overwrite these structures if no errors are found, but the new
error hierarchy does not have references to these previous error-containing cells.
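The rule for when an error structure is created can be sketched as follows. The hierarchy and error set mirror the CHIP/ALU/CLOCK example above, but the function and data names are invented for illustration:

```python
# An ERR_ structure is created for a cell only if it contains errors or
# (transitively) references a cell that does. (Illustrative sketch only.)

def error_cells(children, has_errors, prefix="ERR_"):
    def needed(cell):
        return has_errors.get(cell, False) or any(
            needed(c) for c in children.get(cell, []))
    named = {prefix + c for c in children if needed(c)}
    leaf = {prefix + c for c, e in has_errors.items() if e}
    return named | leaf

hier = {"CHIP": ["ALU", "CLOCK"], "ALU": ["BIT0", "BIT1"],
        "CLOCK": ["CLK1", "CLK2", "CLK3"]}
errs = {"BIT0": True, "BIT1": True, "CLK1": True, "CLK3": True}
print(sorted(error_cells(hier, errs)))  # note: no ERR_CLK2 in the output
```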
Account File
The IC Validator tool creates an account file, cell.acct, that lists the number of devices and
their types extracted from each cell in the hierarchy. The cumulative count shown is the
total count of all devices contained in that structure.
Equivalence Files
If you do not supply an equivalence file, then an LVS compare run automatically produces
an equivalence file that is used to compare the layout and schematic netlists. When
producing an equivalence file, the IC Validator tool recognizes that some equivalence
points are extraneous and do not aid in the comparison process. After all analysis is
completed, the IC Validator tool generates an equivalence file, equiv.run, that lists all the
equivalence cells that were used during the run. See the equiv_options() function in the
IC Validator Reference Manual for more information.
The device leveling summary file shows body layers for devices that cannot be pushed
back to their original cells after device recognition and property calculations. The device
body layers that are successfully pushed back are not shown in this file. The file shows:
• Body Layers. Derived layer of device body that is leveled out of the cell in which it
originally resides.
• Cell. Original cell of the leveled devices.
• Layer Interacted. Layer that interacted with device body, causing device leveling. If
there is an interaction among body layer polygons from the same body layer across
multiple levels of hierarchy, the body layer name is printed in this column.
Editing Errors
After the error structure has been placed into the database, you can begin editing and
correcting the errors. The LAYOUT_ERRORS file lists the location and the structure name
for each error encountered. The summary file, cell.sum, lists the checks that have errors.
Because the output of the IC Validator tool is hierarchical, you need to correct an error only
one time, no matter how many times the structure is placed. You continue this process
until all errors are eliminated.
4
IC Validator Live DRC
This appendix discusses the IC Validator Live DRC tool usage within the
Custom Compiler, IC Compiler II, Fusion Compiler, and Cadence Virtuoso tools.
the need for a separate window to view violations. The following sections describe the
steps to set up IC Validator Live DRC, execute a run, and view violations.
To make the IC Validator Live DRC toolbar visible, select Window > Toolbars > DRC.
2. Open the IC Validator Live DRC Setup Window
To access IC Validator Live DRC using the Verification menu, select Tools >
Settings Description
Run Dir Select the directory in which you will run IC Validator Live DRC.
The output files will be stored in this directory.
Layout Format For the OpenAccess format, verification runs on the current cell
view.
For the Stream and OASIS formats, the tool exports Stream or
OASIS and runs IC Validator Live DRC on the generated GDSII or
OASIS file. Set the following:
• Format: OpenAccess, Stream, or OASIS: Exports the layout cellview used as input
to the File you specify.
• Click the Config button to open the Export Stream
or Export OASIS dialog box. To set the options in the dialog
box, see the “Exporting Design Data in GDSII Stream Format”
or “Exporting Design Data in OASIS Format” sections of the
Custom Compiler Environment User Guide. The options you
specify are integrated with PVE (preferences in the pdk.xml
configuration).
To select specific rules from the runset (rule) file, click . In the Live DRC Rule
Selection dialog box, select the rules you want to use for the rule checking, and click
OK.
In the filter box, enter the full or partial name of the item or rule you want to select using
a simple string match, a glob-style pattern, or a regular expression, as controlled
by the Filters option in the General Options dialog box. To clear a filter box, click the X
shown in the text field. (See the "Entering General Option Settings" section of the Custom
Compiler Environment User Guide.)
or
db::getPrefValue dbExportOasisLayerMapFile
in the console.
When you right-click a selected IC Validator Live DRC job, a menu appears from which
you can:
• Terminate running jobs using the Kill command
• View the output file (icvl_manager.log)
If the total number of polygons in the current view window is more than 50,000, a warning
appears because IC Validator Live DRC runtime performance can degrade beyond this
point. If you select "Do not show this message again," future runs use IC Validator
Live DRC unless the maximum polygon count limit (currently 2,000,000) is reached. In this
case, IC Validator Live DRC cannot be run.
Additional Options
Additional options like halos and recipes allow you to better control the run. For example,
halos expand the data that is being checked to include some area immediately outside the
view window and recipes allow you to create and run subsets of rules that can be selected
from a pull-down menu.
Halos
To run IC Validator Live DRC on data that is immediately outside the data in the view
window, add a halo to the run. Adding a halo increases the size of the check window, as
shown in Figure 15. By expanding the data checked to include the polygons immediately
outside the view window, you can verify that design changes within the view window
did not create any new DRC violations with the polygons immediately outside the view
window.
The black window is the viewport and the red window is the viewport plus the halo. If IC
Validator Live DRC is executed with the halo setting:
• Only polygons that have some portion of their area inside the red window are
included in the check. (Polygon 1 is included, but Polygon 2 is not.)
• Only error shapes that overlap the black window are included in the result. (Error 1
is included, but Error 2 is not.)
You can define the halo range for IC Validator Live DRC by specifying the
xtICVLiveHaloRange preference. You can control whether IC Validator Live
DRC writes out two OA markers to indicate the checking range by specifying the
xtICVLiveDumpCheckingRange preference; one marker shows the checked
viewport, and the other marker includes the halo range.
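For example, assuming the same console convention shown earlier for reading preferences (db::getPrefValue), the halo preferences might be set as follows. The db::setPrefValue command name and the values here are illustrative assumptions, not verified syntax:

```
# illustrative sketch: set a 2-micron halo and enable the checking-range markers
db::setPrefValue xtICVLiveHaloRange 2.0
db::setPrefValue xtICVLiveDumpCheckingRange true
```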
Recipes
Recipes are JSON text files that contain a subset of rules from the signoff runset.
Recipes allow you to run specific rules based on the design stage. For example, a recipe
containing spacing rules or rules related to front-end layers can be executed at the very
early stages of a design. Recipes with more advanced rules or rules related to back-end
layers can be executed toward the end of the design cycle. You can create new recipes
or modify existing recipes in the Recipe Editor; the Recipe File path specifies a file that
can contain multiple recipes.
2. Click the browse button, as shown in Figure 16, to choose a recipe from the Select Recipe File
dialog box. Click OK to load the Arguments field with the selected recipe.
3. Choose <No-Recipe>, as shown in Figure 17, to remove the recipe settings from the
Arguments path and create a new recipe.
4. Check or uncheck the rules in the Rule Name list that you want included in
your recipe. Click the save button to save a new recipe or the settings of an existing recipe. Click
the delete button to remove the existing recipe from the specified Recipe File.
Alternatively, this command can be added to the .cdsinit file to automatically load the IC
Validator Live DRC toolbar in Virtuoso.
When Virtuoso is opened, the IC Validator Live DRC toolbar is displayed, as shown in
Figure 18. It might be necessary to stretch the toolbar so that all of the buttons can be
seen.
Click the run button (or press the F12 keyboard shortcut) to start the IC Validator Live DRC check.
During the check, a modal dialog box is displayed. To terminate the check, click Cancel, as shown
in Figure 20.
Related messages appear in the Virtuoso command window. IC Validator Live DRC
messages are prefixed with "[ICV Live]". These messages indicate what IC Validator
Live DRC is doing and explain why a job might have terminated. Some
examples are shown in Figure 21.
IC Validator Live DRC also has a pop-up violation viewer with a detailed description of the
violated rules and more violation highlight options. The IC Validator Live DRC Violation
Viewer is shown in Figure 23. To bring up this violation viewer, click the icon in the IC
Validator Live DRC toolbar.
Additional Options
Additional options, such as halos and recipes, give you better control of the run. For example,
halos expand the data that is being checked to include some area immediately outside the
view window, and recipes allow you to create and run subsets of rules that can be selected
from a pull-down menu.
Halos
Adding a halo increases the size of the check window, as shown in Figure 25. By
expanding the data checked to include the polygons immediately outside the view window,
you can verify that design changes within the view window did not create any new DRC
violations with the polygons immediately outside the view window.
The black window is the viewport and the red window is the viewport plus the halo. If IC
Validator Live DRC is executed with the halo setting:
• Only polygons that have some portion of their area inside the red window are
included in the check. (Polygon 1 is included, but Polygon 2 is not.)
• Only error shapes that overlap the black window are included in the result. (Error 1
is included, but Error 2 is not.)
Recipes
Recipes are JSON text files that contain a subset of rules from the signoff runset.
Recipes allow you to run specific rules based on the design stage. For example, a recipe
containing spacing rules or rules related to front-end layers can be executed at the very
early stages of a design. Recipes with more advanced rules or rules related to back-end
layers can be executed toward the end of the design cycle. To create a recipe in IC
Validator Live DRC, use the following features, as shown in Figure 26.
• Rule Recipe: Defines the rules that are checked in the IC Validator Live DRC run. Live
DRC categorizes recipes into two types as shown in Figure 27.
◦ User recipes: Recipes in $HOME/.icvlive/recipes.
◦ Others recipes: Recipes in $ICVLIVE_RECIPE_PATH.
◦ Click the edit button to open the Recipe Editor and edit the currently selected recipe.
• The File menu has three new actions: New, Rename, and Delete.
• The Recipes menu lists all available recipes and allows you to switch between recipes.
Figure 29 Recipes
You can edit any files that have write permissions directly from the IC Validator Live DRC
Configuration Window.
After you set up a configuration, you can save it by clicking the save button and using the
pop-up dialog.
If this configuration is commonly used, it may be useful to load it automatically. You can do
this with the ICV_LIVE_CONFIG_FILE environment variable. Set this variable to a colon-separated
list of files or directories. When the IC Validator Live DRC configuration menu is opened,
IC Validator Live DRC reads the configuration files in the ICV_LIVE_CONFIG_FILE path.
You can then select a configuration from the configuration drop-down list at the top of the
Configuration menu.
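For example, in a csh environment the variable can be set before starting the tool. The paths here are hypothetical placeholders:

```
# make a shared project configuration directory and a personal configuration
# file available in the configuration drop-down list (paths are examples)
setenv ICV_LIVE_CONFIG_FILE /proj/shared/icvlive_configs:$HOME/.icvlive/my_config.json
```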
Add the runset and the layer map. The configuration drop-down menu displayed at the
top of the Configuration window can be used to switch between different configuration
settings.
Saving Configurations
To save configurations:
1. In the ICV Live Configuration, click OK or Apply.
2. Switch to another configuration using the Configuration drop-down list.
IC Validator Live DRC checks the write permissions of the configuration as follows:
a. If the unsaved configuration is writable, IC Validator Live DRC saves changes into a
writable configuration file.
b. If the unsaved configuration is read-only, a message appears to prompt you to save
the configuration in a new writable configuration file or overwrite an existing writable
configuration file.
{
"configurations": [
{
"checkFilter": {
"checkLayers": "Visible Layers Involved",
"excludeDensityRules": true,
"loadedRecipeFiles": [
"/path/to/beol_spacing.icvrcp",
"/path/to/beol_density.icvrcp"
],
"selectedRecipe": "Spacing Rules",
"selectedRecipeFile": "/path/to/beol_spacing.icvrcp"
},
"name": "BEOL Configuration",
"runSettings": {
"clfFile": "/path/to/beol_clf.txt",
"enableColoring": true,
"exportLayoutToOasis": true,
"haloSize": 1.5,
"layerMappingFile": "n7.layermap",
"objectMappingFile": "n7.objmap",
"runsetFile": "n7.rs"
}
},
{
"checkFilter": {
"checkLayers": "All Layers",
"excludeDensityRules": true,
"loadedRecipeFiles": [
],
"selectedRecipe": "",
"selectedRecipeFile": ""
},
"name": "All Layers ",
"runSettings": {
"clfFile": "",
"enableColoring": false,
"exportLayoutToOasis": false,
"haloSize": 1,
],
"selectedRecipe": "Custom Checks",
"selectedRecipeFile": "/path/to/feol.icvrcp"
},
"name": "FEOL Configuration",
"runSettings": {
"clfFile": "/path/to/feol_clf.txt",
"enableColoring": true,
"exportLayoutToOasis": true,
"haloSize": 1.5,
"layerMappingFile": "n7.layermap",
"objectMappingFile": "n7.objmap",
"runsetFile": "n7.rs"
}
}
],
"majorVersion": 1,
"minorVersion": 0
}
Bindkeys
IC Validator Live DRC bindkeys allow quick access to common functions. Two
default bindkeys are defined at the end of the live.il file, which is located at
$ICV_HOME_DIR/etc/livedrc/virtuoso/live.il.
• Shift+F12 brings up the config window.
• F12 runs IC Validator Live DRC.
You can define bindkeys by using the hiSetBindKey() function to set the window type (Layout),
the key, and the SKILL function to call, as shown here:
• Call icvlSetup(hiGetCurrentWindow()) to bring up the configuration window.
• Call icvlExecute(hiGetCurrentWindow()) to run IC Validator Live DRC.
• Call icvlBrowse(hiGetCurrentWindow()) to bring up the violation viewer.
For example:
; Set hotkeys for the IC Validator Live DRC configuration dialog and execution
hiSetBindKey("Layout" "Shift<Key>F12" "icvlSetup(hiGetCurrentWindow())")
hiSetBindKey("Layout" "<Key>F12" "icvlExecute(hiGetCurrentWindow())")
You must ensure that the version of the IC Validator executable that you specify is
compatible with the IC Compiler II or Fusion Compiler version that you are using. To
report the version compatibility, use the report_versions IC Compiler II or Fusion
Compiler command.
2. Set the application options for interactive signoff design rule checking.
At a minimum, you must specify the foundry runset to use for design rule checking
by setting the signoff.check_drc.runset application option and the layer
mapping file that maps the technology file layers to the runset layers by setting the
signoff.physical.layer_map_file application option.
Before you run interactive design rule checking, configure the run by setting the
application options for interactive signoff design rule checking, as shown in Table 13.
To set the application options, use the set_app_options command. To see the current
settings, use the report_app_options command.
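For example, the minimum setup described above might look like the following in the tool shell. The file paths are placeholders, and the exact values depend on your foundry kit:

```
fc2_shell> set_app_options -name signoff.check_drc.runset \
    -value /path/to/foundry_drc.rs
fc2_shell> set_app_options -name signoff.physical.layer_map_file \
    -value /path/to/tech_to_runset.layermap
fc2_shell> report_app_options signoff.check_drc.*
```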
Table 13 Application Options for Interactive Design Rule Checking
► Ensure that the layout window displays the objects on which you want to perform
design rule checking.
By default, interactive design rule checking uses the frame view of each cell. If you
need more detail than what is provided in the frame view, you can use the design
or layout view for specific cells. To specify the view used for specific cells, use the
gui_set_display_view command. For example, to use the layout view for all
hierarchical cells, use the following command:
fc2_shell> gui_set_display_view \
-cells [get_cells -hierarchical *] -view layout
For information about controlling the objects displayed for design rule checking, see
Displaying Objects for Design Rule Checking.
► Display the DRC toolbar by right-clicking in the GUI menu bar and selecting DRC
Toolbar, as shown in Figure 36.
► Configure the rules to use for design rule checking by choosing Edit > Options > ICV.
► Run design rule checking by choosing Edit > ICV > Run ICV-Live on Current View or
by clicking the ICV Run icon in the DRC toolbar.
The first time that you run IC Validator Live DRC (or after changing layers or rules), the
tool caches the runset, which can take a few minutes. Subsequent runs do not perform
this caching and therefore are much quicker.
• Display the Hierarchy Settings panel by clicking the Hierarchy Settings icon in the
View Settings panel or the menu bar.
◦ Specify the number of view levels in the “View level” field.
◦ Select Standard, “Fill Cell,” and Block in the “Expanding cell types” section.
Figure 37 shows the View Settings and Hierarchy Settings panels used to control the
layout window display for design rule checking.
Note:
For more information about interactive signoff design rule checking, see the
Fusion Compiler™ User Guide or the IC Compiler™ II Implementation User
Guide, which are available on SolvNetPlus.
the IC Validator Live DRC runtime. To improve performance, prepare a cache directory
containing cache files generated during the design process. When the window changes in
the future, the IC Validator Live DRC feature can search for and reuse an older cache file
from this directory if the layers, rule-selection, and settings match the current run. These
cache directories can be shared with other users. To create runset cache directories and
reuse or share files:
1. Set up the local cache directory and run IC Validator Live DRC to compile the runset
with different configurations.
unset ICV_DISABLE_RUNSET_CACHE
setenv ICV_RUNSET_CACHE_DIR <first_user local>
# Start ICV Live
2. Transfer the cache files from <first_user local> to the <new_user> shared cache
directory, <new_user shared>.
3. Set up both local and shared cache directories in <new_user> to run IC Validator Live
DRC so that it can pick up the pre-compiled files as well as store newly created cache
files in the local path.
unset ICV_DISABLE_RUNSET_CACHE
setenv ICV_RUNSET_CACHE_DIR <new_user local>
setenv ICV_RUNSET_CACHE_DIR_SHARED <new_user shared>
# Start ICV Live
2. Create a config.rs file in your run directory using the following lines, and substitute the
arguments in select_window with the coordinates of the IC Validator Live DRC run
from the previous step:
hierarchy_options(
explode_all = {"*"}
);
incremental_options(
select_window = {{186.62, -33.817, 192.58, -31.087}},
clip_window = false,
window_error_filter = KEEP_ENCLOSED,
window_ambit = double
);
IC Validator Live DRC runs on flat data. Therefore, you must use the
hierarchy_options() function to flatten the input to the IC Validator run. Although
IC Validator Live DRC does not clip polygons at the edge of the view window, it does
not report any violations outside of the view window. Therefore, the clip_window and
window_error_filter arguments must be used with the incremental_options()
function.
3. Add this file to your IC Validator run by setting -runset_config config.rs.
4. Run IC Validator.
5. IC Validator Explorer DRC
Run Explorer DRC auto mode on all tiers, with user-defined tiers specified
using -explorer_tiers_file mytier.txt:
% icv -explorer auto -explorer_tiers_file mytier.txt runsetfile
Table 15 shows the different Explorer DRC tiers, pre-included checks, and customizable
categories.
Tier | Can the user add checks to this tier? | Can the user name this tier? | Included checks | Optional checks
Option Definition
-explorer_script path_to_file
Specifies the file used to add resources after Tier 0 and Tier 1 are completed.
You can start an auto run with fewer resources and dynamically add resources
when Tier 0 and Tier 1 are completed. Use these resources for a faster
turnaround time of the remaining checks.
additional tiers, this command-line option adds additional resources following the basic
checks. An example of a script file looks like this:
#!/bin/csh -f
echo "Explorer Tier 0 and Tier 1 checks are done. Adding resources and
running remaining Tiers"
Category Description
Fill overlap diagnostic Specifies DRC errors that are diagnosed as fill overlap issues (runset
selected fill overlap rules).
Priority rules Specifies DRC errors that are available from Explorer priority checks
(runset selected basic rules). Some of the rules in this category
might get an EXPLORER prefix, which denotes that the rule has
been modified (faster version) from the original runset. The original
runset version runs as part of DRC Errors in the “All remaining rules”
category.
Priority Rules (user-defined) Specifies DRC errors available from user-defined priority rules.
Voltage dependent diagnostics Specifies Explorer methodology warnings that flag text-based voltage
conflicts on polygons or net objects. In addition, stray voltage texts and
voltage checks are included.
Explorer tier TierNumber Specifies DRC errors from unnamed tiers. By default, these errors are
categorized under Explorer tier, TierNumber. You can modify the tier
name using the NAME_TIER directive in the file.
All remaining rules Specifies the available DRC errors from checks that are not classified
under predefined or user-defined tiers.
In addition to the error categorizations in Table 17, DRC errors are categorized by their
user-defined category names (defined using the NAME_TIER directive in the file).
Note:
Error categories are dependent upon the type of Explorer mode and diagnostics
flagged in that mode.
Note:
While storing error markers in the error database (PYDB), Tier 0 and Tier 1
(Explorer priority checks and user-defined priority checks) enforce the limit
set by the error_sampling_per_check argument (the default is 1 million).
The limit of other tiers is based on the runset. See the IC Validator Reference
Manual for more information about the error_sampling_per_check argument.
6. IC Validator Explorer LVS
Syntax Description
-explorer_lvs_checks [SHORTS | ALL]
Optional. Performs only text-based short checks, or text-based short
checks along with LVS compare.
Use SHORTS to perform only text-based short checks. If
-explorer_lvs_cells=AUTO (the default), all of the equivalence
cells that are not SOC Top cells are deleted or black-boxed. Set
-explorer_lvs_checks=ALL to check text-based shorts along with
LVS compare.
-explorer_lvs_preserve_cells [preserve_cell_list]
Optional. Specifies the cells that are to be preserved from being
deleted or black-boxed in Explorer LVS so that those cells can be
included in Explorer LVS.
All possible usages of the Explorer LVS command-line options are shown in Figure 39.
• Add all of the cells to the default:
% icv -explorer lvs \
-explorer_lvs_cells ALL \
runsetfile
Result: Text-based short checks on all of the cells with metal layers.
• Add all of the layers to the default:
% icv -explorer lvs \
-explorer_lvs_layers ALL \
runsetfile
Result: Text-based short checks on the SOC Top cells with all of the layers.
• Add all of the cells and layers to the default:
% icv -explorer lvs \
-explorer_lvs_cells ALL \
-explorer_lvs_layers ALL \
runsetfile
Result: Text-based short checks on all of the cells with all layers.
• Add LVS compare check to the default:
% icv -explorer lvs \
-explorer_lvs_checks ALL \
runsetfile
Result: Text-based short checks along with LVS compare on the SOC Top cells with metal layers only.
• Preserve CellA and CellB in Explorer LVS:
% icv -explorer lvs \
-explorer_lvs_preserve_cells "CellA CellB" \
runsetfile
Result: Same as the default behavior, except that Explorer LVS guarantees that CellA and CellB are preserved.
7. PXL
This chapter describes PXL, a programmable and extensible language, as well as the
basics of function definitions and the use of functions in the IC Validator tool.
PXL Architecture
PXL is fully programmable, with general programmability features such as:
• Variables
• Flow Control
• Lists (arrays) with standard indexing
• Structures
• Hashes
• Enumerated types
• User-defined functions
• Mathematical, logical, and string operations
• Macros with command-line option support
• Environment variables
• Static scope
These features allow for:
• Straightforward runset development
• Easy-to-maintain runsets
• Succinct code
• Code reuse with subroutines
• Support for user-defined functions
Figure 40 shows the basic structure of PXL.
General Definitions
This section provides definitions and characteristics of some syntactic elements in PXL,
such as identifiers, string literals, keywords, operators, and comments. Understanding the
purpose and function of these syntactic elements is helpful when creating runsets.
Identifiers
Identifiers for any PXL components can include alphabetic characters, numbers, and
underscores in any order.
PXL is case-sensitive. The identifiers listed here are all different:
• abCD
• ABCD
• abcd
• another_identifier_using_underscores
• layer20
• _productname (one leading underscore; reserved for product use)
• HIERARCHICAL (reserved keyword)
String Literals
A string literal is a group of characters beginning and ending with a double quotation
mark (").
Note:
The first character of a text string cannot be an integer or a space followed by
an integer.
String literals must not contain a new line. If a new line is needed in a string
literal, the new-line character (\n) must be used.
The following groups of characters are examples of string literals:
"hello"
"\""
"abc" "def"
String literals are automatically merged with any adjacent string literals. In particular, they
are merged with preprocessor variables that contain string values, such as __FILE__, as
well as references to environment variables, such as $<id>.
Use escape sequences within strings to set aside parts of the string as instructions to the
compiler. The available set of escape sequences in PXL are shown in Table 20.
Table 20 Escape Sequences
Escape Definition
\n New-line character
\t Tab character
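A minimal sketch using both escape sequences from Table 20; the variable names are illustrative:

```
label : string = "name:\tvalue";              // tab between the words
message : string = "first line\nsecond line"; // two lines in one literal
```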
Numeric Constants
Numeric constants in PXL are defined as shown in Table 21.
Table 21 Numeric Constants
octal number Contains a leading zero (0) and any of the digits
from zero (0) to seven (7). Example: 0346.
Keywords
Keywords are built-in words that are recognized by the language. Keywords are reserved
and therefore cannot be used as identifiers.
A few of the uses of keywords are
• Flow controllers, such as if, foreach, and while
• Operators, such as in and contains
• Constants, such as true and false
The following tables list the reserved words in PXL.
Note:
See Identifiers on page 153 for more information about naming restrictions.
Table 22 lists keywords that are reserved by the system and are unavailable for external
use.
Table 22 System Keywords
volatile
Table 23 lists keywords used by the In-Design flow. Do not use variables with these names
in an IC Validator runset.
Table 23 In-Design Flow Keywords
INCREMENTAL_MILKYWAY_OPTIONS_INCLUDE_VIEW INCREMENTAL_OPTIONS_SELECT_WINDOW
INDESIGN_DO_FLOATING_VIA_FILL_PURGE INDESIGN_DONT_UPDATE_TECHNOLOGY_FILE
INDESIGN_ENABLE_MY_ASSIGN METAL_FILL_BLOCKAGE_LAYERS
METAL_FILL_BLOCKAGE_VIEW METAL_FILL_DATATYPES
METAL_FILL_DENSITY_DEBUG METAL_FILL_DENSITY_CHECK
METAL_FILL_DENSITY_DELTA_WIN_SIZE METAL_FILL_DENSITY_MIN
METAL_FILL_FLATTEN METAL_FILL_INLIB
METAL_FILL_INLIB_CELL METAL_FILL_INLIB_EXCLUDE_FILL
METAL_FILL_INLIB_PATH METAL_FILL_INSTALL_ICV_RH
METAL_FILL_INSTALL_PRIMEYIELD_RH METAL_FILL_LAYER_NUMBERS
METAL_FILL_OUTLIB METAL_FILL_OUTLIB_APPLY_PREFIX
METAL_FILL_OUTLIB_CELL METAL_FILL_OUTLIB_CELL_PREFIX
METAL_FILL_OUTLIB_MODE METAL_FILL_OUTLIB_PATH
METAL_FILL_OUTLIB_TEMP_CELL METAL_FILL_OUTLIB_VIEW
METAL_FILL_OUTLIB_RF_CELL_PREFIX METAL_FILL_RUNSET2LIBRARY_LAYER_MAP
METAL_FILL_SELECT_WINDOW METAL_FILL_USER_RUNSET
_METHODOLOGY_FUNCTIONS_RS_ SAVE_METAL_FILL_INLIB_EXCLUDE_FILL
_SMF_ICV_RH_ SNPSINDESIGN
VIA_FILL_ALLOWED_ONE_SIDE VIA_FILL_ASSOCIATED_METAL
VIA_FILL_DATATYPES VIA_FILL_LAYER_NUMBERS
VIA_FILL_MASK_NAMES VIA_FILL_MET_ENCLOSURE
VIA_FILL_OUTLIB_RF_CELL_PREFIX _WRITE_AREFS
by const constraint
function hash if
in in_out integer
list newtype of
on out return
thru to true
void while
Enumerated type elements cannot be used as identifiers. Table 25 lists the enumerated
type elements.
ARC_PLAN_CELL ASSIGNS
BISECTOR BLACK_BOX_DESIGN
BODY_EXTENTS BODY_POLYGONS
COLOR_SIDE COMBINE_COLOR
CONFLICT CONTACT
CREATED_PORTS CROSS
DENSITY_GRADIENT DENSITY_NORMAL
DENSITY_UNDER_1 DENSITY_UNDER_2
DESIGN DESIGN_VIEW
DIODE_DESIGN DISCARD
DONUT DOT
ENCLOSE END_CAP_DESIGN
EXTRACT FAIL
FAILED FAILED_INSERTION
FILL_BLOCKAGE_USE FILL_DESIGN
FILL_RINGS FILL_TRACK
FLIP_CHIP_DRIVER_DESIGN FLIP_CHIP_PAD_CELL
HORIZONTAL_TRACK_OBJECT HORIZONTAL_WIRE_TRACK
ICV_LCC_MAX
IMAGE_CELL IN INCLUSIVE
INSIDE INSIDE_TO_OUTSIDE
INSTANCE_NAME INSTANCES
INTERNAL_DESIGNS INTERSECTION
IO_PAD_CELL KEEP
LVS_USER_UNIT_METER LVS_USER_UNIT_MICRON
MERGED METAL_HEIGHT
NDM
NETS NEVER
NEW_AREF_ONLY NINETY
NOT_DEFINED_LAYER_TYPE
NOT_TEXTED NOTCH NP
OLD_AREF_ONLY ON ONE
PATH PATH_OBJECT
PLACEMENT_HARD_MACRO_BLOCKAGE PLACEMENT_PARTIAL_BLOCKAGE
PLACEMENT_SOFT_BLOCKAGE PM_EXACT
PMOS PN PNP
PREFIX_NAME PRIMARY_AXIS
RAM_CELL REASSIGN
REASSIGNED_SHORTED RDL_USE
SHAPING_BLOCKAGE SHIELD_ROUTE
TRUNCATE TSV_CELL
UNCONNECTED UNDER_EXPOSURE
Operators
Operators are symbols that have a particular meaning. For example, + indicates that two
numbers are added. The operator types are shown in Table 26.
Table 26 Operator Types
Comments
Comments are text used to document the program. Comments are useful in indicating the
intent of the code to readers. Comments are treated as white space. The comment types,
shown in Table 27, are based on how many lines each comment contains.
Table 27 Comment Types
Block comments do not nest. If */ is seen within a comment, the preprocessor reports a
warning. For example, if the runset contains
/* - 1st comment start
/* - 2nd comment start
*/ - 2nd comment end
*/ - 1st comment end
The "1st comment start" matches with "2nd comment end"; the "1st comment start" does
not match with "1st comment end".
Here are some examples of comments:
/* this is a single-line block comment */
/*
* block comments can span
* several lines, such
* as this one does.
*/
/*
A leading * is not required on each line of a
block comment but does make the comment stand out
from the program.
*/
/*
* assuming environment variable my_value contains the value abcd,
* the following code evaluates to true; otherwise evaluates to false.
*/
res : boolean = ( $my_value == "abcd" );
/*
* if environment variable aaa contains the value
* /home/dir/path, the following code produces a string
* containing a path to the file "cases" in that directory.
*/
path : string = $aaa "/cases";
/*
* if environment variable bbb is defined but empty, the
* following code produces the string abcdef.
*/
mystr : string = "abc" $bbb "def";
Variables
Variables are used to store information temporarily in the program.
When declaring variables, the variable name must be followed by a colon (:) and the
variable type. Variables can also be given an initial value when declared. See the
examples in the following section.
Note:
See the Identifiers section for information about naming restrictions.
Initializing Variables
You can initialize variables to any of the supported data types in two ways:
• Using a literal value.
y : boolean = true; // initialize by literal value
• Using an expression.
z : boolean = !y; // initialized by expression
Scope of Variables
Scope in PXL is the meaning that variables and expressions have within several contexts
in a program. The various rules and conditions that relate to scope of variables are
described here by using examples.
Local scope is defined by a set of braces ({ }). That is, compound statements, defined with
curly braces, define a new variable scope. See Compound Statement on page 208 for
the use of braces.
Flow control statements create a new scope.
if ( var == 1) { // Start of new scope
my_var: integer = 1;
...
} // End of scope.
• Variables declared in a local scope do not exist outside of that scope. That is, if a
variable is declared inside of a particular scope, the variable cannot be used outside of
that scope. For example,
{
x = y + b;
}
z = x; // x is not defined; it only exists in the local scope
• Variables are global from outermost scope to innermost scope. For example,
{
x : integer = 1;
{
a = x + b; // x exists and can be seen within this scope
// (global from outer scope)
x = y + b; // global value of x is overwritten
}
z = x; // x is defined here and contains the value of y + b
}
z = x; // x is not defined here – parser error
• Variables declared in an outer scope that are redeclared in an inner scope are
shadowed to prevent value collisions.
◦ If variable shadowing occurs and the scope of the shadowed variable is not defined
by a function declaration, the program generates a warning.
◦ Within function calls, shadowing is expected and there is no warning. For example,
y: integer = 1;
x: integer = 1;
{
y : integer = 3; // The original y value is shadowed.
// This is a new y.
x = x + 1; // x is equal to 2 here
y = y + 1; // y is equal to 4 here
}
// y is equal to 1 here – the shadowed value
// returns when exiting the inner scope
// x is equal to 2 here
z = x + y;
};
Operators
Operators are symbols used within an expression to specify certain operations that are to
be performed.
Assignment Operators
An assignment operator evaluates an expression and saves it to a target lvalue.
An lvalue is an expression that can contain a value and can be used on the left side of an
assignment statement.
Assignments involving the use of > or >= in constraint values must have white space
before and after the assignment operator. You can also place the constraint value within
parentheses following the assignment =.
For example,
abc = >=3;
xyz = (>5);
Note:
Assignment operators cannot cascade. This example is not legal:
a = b = c
Relational Operators
A relational expression evaluates an expression to a Boolean value, either true or false.
Table 28 defines the relational operators.
Table 28 Relational Operators
e1 == e2 true if e1 is equal to e2, false otherwise All completely defined data types
e1 != e2 true if e1 is not equal to e2, false otherwise All completely defined data types
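A minimal sketch using both relational operators from Table 28; the variable names are illustrative:

```
a : integer = 3;
b : integer = 4;
is_equal : boolean = ( a == b ); // false
differs : boolean = ( a != b );  // true
```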
Logical Operators
A logical expression evaluates to a Boolean value. A logical operator requires all
arguments to be of boolean type. Expressions are evaluated from left to right, and
evaluation stops when the result is determined. Table 29 defines the logical operators.
Syntax Description
Conditional Operator
A conditional expression takes three arguments; the first argument must be of boolean
type. Table 30 defines the conditional operator.
Table 30 Conditional Operator
Syntax Description
Consider a conditional operator with three arguments e1, e2, and e3. The conditional
operator is evaluated in the following order:
1. The first argument (e1) is evaluated.
2. If the first argument (e1) evaluates to true, the second argument (e2) is evaluated.
3. If the first argument (e1) evaluates to false, the third argument (e3) is evaluated.
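A minimal sketch of this evaluation order; the variable names are illustrative:

```
use_small : boolean = true;
// e1 is use_small; because it is true, only 10 (e2) is evaluated,
// and 100 (e3) is never evaluated
limit : integer = ( use_small ? 10 : 100 );
```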
e1 | e2 Bitwise OR Integer
Note:
Any expression with a division operator (/) or exponentiation operator (**)
cannot be used as the index of a list or hash table.
To perform an integer operation, you must wrap the operation in a call to the dtoi()
function. This function converts a double into an integer. For example,
var : integer = dtoi(10/3);
Member Reference
A member reference expression accesses one member of a compound object. The
member reference takes the type of that single member. The member is an lvalue and
functions as the target of an assignment. Table 33 defines member reference expressions.
See Assignment Operators on page 175 for more information about lvalues.
Table 33 Member Reference
Syntax Description
Operators Associativity
() [] . Left to right
* / % Left to right
+ - Left to right
== != Nonassociative
^ Left to right
Operators Associativity
| Left to right
|| Left to right
?: Right to left
= @= ?= Illegal
Associativity Definition
A + B + C is evaluated as ( A + B ) + C
A ** B ** C is evaluated as A ** ( B ** C )
A * B + C >> 2 is evaluated as ( ( A * B ) + C ) >> 2
A >= B == C is evaluated as ( A >= B ) == C
A < B == C > D is evaluated as ( A < B ) == ( C > D )
Variables can be implicitly typed based on the return type of the operator. For example,
a : double = 1.1203;
b : double = 2.0213;
c = a + b; // c is implicitly the type of "double"
Primitive Types
Table 36 shows the primitive data types supported by PXL.
Table 36 Primitive Types
handle An opaque data type, used to track data objects external to the PXL
program itself.
constraint of double and constraint of integer
Concise representation of a set of contiguous double values or integer values.
integer
Here are examples of integer declarations:
i : integer; // declared but not initialized
i : integer = 1; // declared and initialized
string
Defines a string of zero or more characters. The string type is initialized from a string
literal. For example,
s : string = "xyzzy";
The string type supports the operations shown in Table 37 on any expression of string
type.
Table 37 String Type Operations
handle
An internal representation for an externally managed object. The type handle has a string
representation that is unique for each externally managed object.
The constraint data type can only be created by assignment from another like-typed
constraint or from a literal constant, as shown in Table 39.
Table 39 Constraint Categories
3. For unary constraints, such as == and >, both e1.lo() and e1.hi() return the single limit.
Both lo() and hi() are valid for all constraints. For single-ended constraints, both
functions are defined to return the single endpoint, 'a'. For double-ended constraints, lo()
returns 'a' and hi() returns 'b'.
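A minimal sketch of lo() and hi() on a double-ended constraint, following the description above; the variable names are illustrative:

```
c : constraint of double = [2.0, 8.0];
low : double = c.lo();  // returns the low endpoint, 2.0
high : double = c.hi(); // returns the high endpoint, 8.0
```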
Examples
Here are some examples.
• Constraint of integer
[2,8] contains the values 2, 3, 4, 5, 6, 7, 8
• Constraint of double
[2,8] contains all real numbers in the range including 2.0 and 8.0
(2,8) contains all the real numbers in the range except 2.0 and 8.0
• Declaration example
a_value : constraint of double = [2.0,8.0];
• Assignment examples
distance <= 2.0 // All values less than or equal to 2.0
distance = <= 2.0 // All values less than or equal to 2.0
distance = [ 2.0, 8.0 ) // All values >= 2.0 and < 8.0
count = 5 // Count is exactly equal to 5
count = ==5 // Count is exactly equal to 5
Note:
The symbols such as == and <= are part of the constraint. The assignment
operator = is optional for variables expecting a constraint type. However, if a
variable of the constraint type is on the right side, the assignment operator is
necessary:
my_constraint: constraint of double = < 5.0;
external1(layer1, distance = my_constraint );
external1(layer1, distance < 5); // same as above
external1(layer1, distance = < 5); // same as above
hash of...    Defines a mapping from elements of one data type to another data type.
newtype...
A newtype declares a new type. It does not define a variable. Each declared type is
unique.
The following example is of a valid type definition:
/*
* define a new type, direction.
*/
Here is an example of defining and using a new data type called direction. It has four
allowed values.
enum of...
Defines a set of names (enum literals) distinct from other types. The PXL coding standard
is for enum literals to be all uppercase characters.
For example,
t3: newtype enum of {
X,
Y,
Z, // Trailing comma is not needed; silently discarded.
};
An enum can share names with other enumeration types without error:
t1_e : newtype enum of {
NONE, TOP, BOTTOM, ALL
};
t2_e : newtype enum of {
NONE, LEFT, RIGHT, ALL
};
Cross-type operations are not allowed. Also, within one type definition, you cannot use the
same name more than one time.
This example shows a cross-type operation. This operation is not legal:
v1 : t1_e; // NONE, TOP, BOTTOM, ALL
v1 = NONE; // Legal assignment
v1 = TOP; // Legal assignment
v1 = RIGHT; // Illegal assignment
list of...
Lists are arrays of objects.
• All of the objects are of the same type. A list is allowed to be empty or to contain
duplicate objects.
• It can be processed using the foreach() looping construct.
• Lists can be addressed by an index number using brackets ([ ]).
See Nested Containers on page 195 for information about a list of lists.
This type supports the operations shown in Table 41.
Table 41 List Operations
e1.remove(e2)    Removes all values in the e1 list that are equal to the e2 expression.

Examples:
if (MyPartition.contains(x)) { ... }
i : integer = MyPartition.size();
hash of...
Defines a mapping from one data type to another. The source type is called the key type;
the destination type is called the value type. That is, the hash defines an array of key/value
pairs.
This type requires that the key type be string, integer, or enum, or a type derived from
one of these three types. It cannot be a double. The value type can be of any previously
defined type.
The following example creates a hash that converts integers to strings. In this case,
the key type is integer and the value type is string. This type is initialized using the =>
operator.
t1_h : newtype hash of integer to string;
t2_h : newtype hash of string to integer;
h1 : t1_h = {
1 => "a",
2 => "b",
26 => "z"
};
h2 : t2_h = {
"xyzzy" => 1,
"plugh" => 10,
"help" => 100
};
The hash of... type supports the member operations shown in Table 42.
Table 42 Hash Operations
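For example, here is a small sketch of common hash usage, assuming the h1 hash defined above; bracket indexing and the keys() member appear elsewhere in this chapter, and the note() output shown is illustrative:

```
s : string = h1[2];          // read the value mapped to key 2
h1[27] = "aa";               // add a new key/value pair
foreach( k in h1.keys() ) {  // iterate over all keys in h1
    note( "h1[" + k + "] = " + h1[k] + "\n" );
}
```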
struct of...
Defines a distinct type identified by the user-supplied name. It must have at least one
member and member names must be unique. The members can be of different types.
Here is an example structure that defines a point by its geometric coordinates:
point: newtype struct of { x: integer; y: integer; };
The struct of... type can contain initial values that are used as defaults for structure
creation.
t3_s : newtype struct of {
x: integer;
y: integer = 76; // initializer
z: string = "hi"; // initializer
};
t4_s : newtype struct of {
a: integer = 106; // initializer
b: string = "boo"; // initializer
};
The struct of... type can be assigned from a C-style initializer or from a list of name-
value pairs. The following example shows assignment using the types defined in the
previous example.
s3 : t3_s = { 10, z = "bye" }; // note that 'y' defaults
// to the value 76
box : shoebox;
shoe = box.style; // shoe is "Tennis shoe", the default
box.style = "Dress Shoe"; // Reassign the member box.style
// create a new shoebox with a different kind of shoe
box2 : shoebox = {"Boat Shoe", 9};
function
Allows a function to be passed to another user function. The function that is passed can be
used as though it were a global function. For example,
// input file
#include <icv.rh>
xyzzy : function (void) returning void { note("abcd"); }
plugh : function ( f : function(void) returning void ) returning void {
f(); }
plugh(xyzzy); // summary file shows "abcd" in output when plugh runs
Nested Types
Types can be combined to form nested types, such as
• list of struct
• list of enum
For example, using the shoebox struct definition in the example in struct of... on
page 191:
box : list of shoebox = {
{ "Walking", 12 },
{ "High Heel", 7 }
};
A = box[0].style; // A is "Walking"
B = box[1].shoe_size; // B is 7
Type Analysis
This section describes how PXL handles data types in special cases, such as when one
data type is passed in an argument but another data type is expected.
In cases where PXL encounters one data type when another data type is expected, PXL
performs automatic type conversion whenever appropriate. The rules that are used to
perform automatic type conversion are discussed in the following sections.
While expecting...    Allow integer    Allow constraint of integer    Allow double    Allow constraint of double
integer               Yes              No                             No              No
These conversions are only applied when required by the types of the expression. In
particular, integer-typed expressions are not converted to double unless required by the
context.
When converting an integer or double to a constraint, the resultant constraint appears as if
it had been written using the == unary constraint (CONSTRAINT_EQ).
f: function ( c: constraint ) returning void { ... }
Comparing Doubles
Because the type double is an approximation of the real numbers, there are occasionally
accumulated errors that cause apparently equivalent operations to produce different
results. PXL has special functions for comparing doubles while accounting for such small
errors. These special functions are described in the Equivalence Comparison Functions
and Double Comparison Functions of the Math Functions section in the “Utility Functions”
chapter of the IC Validator Reference Manual.
Note:
A warning is reported whenever there is a direct comparison (either == or !=)
involving one of the double-related types.
Type Promotion
When a list is expected and an expression of the same type as that contained by the list is
found, the expression is promoted from a single item to a list containing that item.
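As a hypothetical sketch, given a function whose parameter expects a list:

```
f : function ( vals : list of integer ) returning void { ... }
f( { 1, 2, 3 } );   // a list argument is passed directly
f( 7 );             // 7 is promoted to the single-item list { 7 }
```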
Parameter Passing
Function parameters that are qualified with either output or input-output must have
matching argument values. The rules previously described do not apply. For example,
when a function expects a constraint argument that is marked as out, providing the name
of an integer-typed variable is not legal because the conversion that is required is not
legal.
String Concatenation
Any expression of the types shown in Table 44 can be appended to a string expression
using the standard addition operator (+).
Table 44 String Concatenation
handle Textual representation of the handle; unique for each distinct object
c : constraint = ( 10, 40 ];
s = "" + c;
// s now contains the string "(10.,40.]", because the
// type of "c" is constraint of double
Default Injection
When processing a function call, parameters for which no value is given assume the
defaults, if any, from the function prototype. A parameter with no default or specified value
results in an error.
Similarly, when processing a literal structure, members for which no value is given assume
the defaults, if any, from the newtype structure definition. A member with no default or
specified value results in an error.
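The following sketch illustrates both cases; the g() function and pt structure here are hypothetical:

```
g : function ( x : integer, y : integer = 27 ) returning void { ... }
g( 10 );    // y assumes its default, 27
g();        // error: x has no default and no specified value

pt : newtype struct of { x : integer; y : integer = 0; };
p1 : pt = { 5 };    // p1.y assumes its default, 0
```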
Nested Containers
Lists, hashes, and structures are also known as containers. Containers can be nested in
many different combinations. An example of a nested container is a list of structures.
Here is an example of a hash of a hash. The runset code is
foreach(key1 in multi_hash.keys() ){
foreach(key2 in multi_hash[key1].keys()){
note( "multi_hash[" + key1 + "][" + key2 + "] = "
+ multi_hash[key1][key2] + "\n");
}
}
The output is
runset.rs:123 multi_hash[A][ONE] = 1
runset.rs:123 multi_hash[A][TWO] = 2
runset.rs:123 multi_hash[B][FOUR] = 4
runset.rs:123 multi_hash[B][THREE] = 3
list2 : list2_l = {
{ 0, 1, 2, 3 },
{ 10, 11, 12, 13, 14 },
{ 20, 21, 22, 23 }
};
The output is
runset.rs:143 list2[0][0] = 0
runset.rs:143 list2[0][1] = 1
runset.rs:143 list2[0][2] = 2
runset.rs:143 list2[0][3] = 3
runset.rs:143 list2[1][0] = 10
runset.rs:143 list2[1][1] = 11
runset.rs:143 list2[1][2] = 12
runset.rs:143 list2[1][3] = 13
runset.rs:143 list2[1][4] = 14
runset.rs:143 list2[2][0] = 20
runset.rs:143 list2[2][1] = 21
runset.rs:143 list2[2][2] = 22
runset.rs:143 list2[2][3] = 23
list3.push_back( { 0, 1, 2, 3 } );
list3.push_back( { 4, 5, 6 } );
list3.push_back( { 7, 8 } );
list3.push_back( { 9 } );
}
}
The output is
runset.rs:159 list3[0][0] = 0
runset.rs:159 list3[0][1] = 1
runset.rs:159 list3[0][2] = 2
runset.rs:159 list3[0][3] = 3
runset.rs:159 list3[1][0] = 4
runset.rs:159 list3[1][1] = 5
runset.rs:159 list3[1][2] = 6
runset.rs:159 list3[2][0] = 7
runset.rs:159 list3[2][1] = 8
runset.rs:159 list3[3][0] = 9
Flow Control
Flow can be controlled using
• Conditional Statements
• Loops
Conditional Statements
Conditional statements in PXL are constructs that allow you to perform different actions
depending on whether a specified condition evaluates to true or false.
The if statement is used in a program to perform an action only when a predefined
condition is fulfilled.
The if statement is used in conjunction with at least two other components:
• An expression that can be evaluated to a Boolean value, true or false.
• A component that is executed if the evaluated expression returns a true value.
Optionally, along with the required components, you can build a more complex conditional
statement by including the following components:
• An else construct followed by a component that is executed if the first expression is
evaluated to false.
• An elseif construct that tests one or more additional conditions when the preceding if
condition evaluates to false. PXL also supports the equivalent elif construct.
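A minimal sketch of these constructs; the variable, condition, and actions are hypothetical:

```
if( count > 0 ) {
    note( "count is positive\n" );
} elseif( count == 0 ) {
    note( "count is zero\n" );
} else {
    note( "count is negative\n" );
}
```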
Loops
As in other programming languages, loops can be used in PXL to repeat an action a
number of times.
PXL provides the following methods to create looping constructs in programs:
• for Loop
• foreach Loop
• while Loop
for Loop
The for loop is used to repeat a particular action a specified number of times. It consists
of two integer bounds (e1, e2) and an action (s1) that must be repeated. It can also contain
an additional integer step value (e3).
It uses the syntax
for (var = e1 to e2) s1; //s1 is a statement; it can be compound
for (var = e1 to e2 step e3) s1;
For example,
for(i=0 to 10 step 2){ // Increment by 2: 0, 2, 4, 6, . . .
if( i < 6) {
a[i] = b[i] + c[i];
} elif( ( i >=6 ) && ( i < 10)){
a[i] = b[i] + c[i] + 1;
} else {
a[i] = b[i] - c[i];
}
};
foreach Loop
The foreach loop is similar to a for loop but it is used to perform a set of actions with
reference to the elements of a container, such as a list or an enum. It does not use a
variable counter to determine the number of iterations of the action.
It consists of a list (e1) and a component (s1) that is evaluated for each element in
e1.
It uses the syntax
foreach (var in e1) s1; //e1 is of the type "list of"
• s1 is evaluated one time for each element of e1, with var copied from the list.
• s1 is bypassed if the input list e1 is empty.
• var is a constant variable of the correct type visible only within statement s1.
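For example, here is a small sketch that visits each element of a list; the names and values are hypothetical:

```
widths : list of double = { 0.1, 0.2, 0.4 };
foreach( w in widths ) {
    note( "width = " + w + "\n" );  // w is a constant copy of each element
}
```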
while Loop
The while loop is used to repeat a particular action until a condition is met. It consists of a
Boolean expression (e1) and an action (s1) that must be repeated. The while loop always
evaluates the Boolean expression e1 at least one time.
It uses the syntax
while (e1) s1; // e1 is evaluated as a Boolean;
// loop continues until e1 is false
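For example, a small hypothetical sketch that halves a value until it is no longer greater than 1:

```
x : integer = 16;
while( x > 1 ) {
    x = x / 2;    // body repeats while the condition is true
}
```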
Functions
Functions in PXL are categorized based on their usage. The function types are described
in Table 45.
Table 45 Function Types
Remote function Function written by the user that can be called by runset functions. There
are also default remote functions supplied by the IC Validator tool.
Utility function Function supplied by the IC Validator tool that provides access to primitive
data in a remote function.
Function Definitions
A function definition defines a set of operations associated with a name. A function
definition identifies the names, types, and defaults for all parameters and for a returning
value.
You can create your own functions and use them along with the provided functions within
your runsets.
As is shown in the following example, all function definitions must be at the top level.
// a simple function definition
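The body of such a top-level definition might look like the following sketch; the layer operations shown are illustrative, not a fixed recipe:

```
my_xor : function (
    a : polygon_layer,
    b : polygon_layer
) returning xorOut : polygon_layer {
    xorOut = (a not b) or (b not a);  // symmetric difference of the two layers
}
```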
Function definitions provide a separate scope for all processing. Scoping is determined in
the context of where things are written (static scoping). For information about scope, see
Scope of Variables on page 173.
Function Prototypes
A function prototype defines the parameters to a function and defaults for optional
parameters. Function prototypes identify call-by-reference parameters. Function
prototypes do not define the operation of the function.
The following example is of a function prototype:
my_xor : function (
a : polygon_layer,
b : polygon_layer
) returning xorOut: polygon_layer;
The following example shows calling the my_xor() function, which was defined in the
preceding example.
a1 : polygon_layer;
b1 : polygon_layer;
xorA1B1 = my_xor(a1, b1);
Function Calls
Function calls define how functions operate. A function call consists of the name of the
function followed by instructions on how and where the function must be executed through
arguments.
Values can be passed to the arguments in a function call in two different styles:
• By argument value alone (pass-by-value). The order of values must match the order of
arguments in the function prototype. When you use this style, you must specify values
for all optional arguments preceding an optional argument that is of interest to you
because values are matched to arguments based on order.
• By name in argument_name = argument_value format (pass-by-name). The order
of arguments is irrelevant. When you use this style, you do not need to specify values
for all optional arguments preceding the optional argument that is of interest to you
because values are matched to arguments based on name.
Both styles can be used in a single function call. However, in a function call, all argument
values specified with the pass-by-value style must precede all argument values specified
with the pass-by-name style. After you use the pass-by-name style in a function call, all
following arguments must use the same style.
For example, the syntax of the interacting() function is:
interacting : binary published function(
layer1 : polygon_layer,
layer2 : polygon_layer,
count : count_constraint_t = >0,
include_touch : more_include_touch_e = EDGE,
processing_mode : processing_mode_e = HIERARCHICAL
) returning result : polygon_layer;
The following is an example of calling the interacting() function with the pass-by-name
style:
y = interacting( layer1 = a, layer2 = b, include_touch = ALL );
// The count argument is skipped, and the value is the default.
// The value of the processing_mode argument is the default.
The following is an example of calling the interacting() function with the pass-by-value
style:
y = interacting( a, b, 2 );
// The value of count is 2.
// The values of the include_touch and processing_mode arguments
// are the defaults.
Here is an example of calling the interacting() function with mixed styles:
y = interacting( a, b, include_touch = ALL );
// The count argument is skipped, and the value is the default.
// The value of the processing_mode argument is the default.
If defaults are present for any parameters, the parameters are not required to be specified.
In the following example, the function has three parameters, two of which have defaults.
f: function
( x : integer,
y : integer = 27,
z : integer = 42
) returning void { ... }
...
With careful interface definition, binary allows for easy-to-read programs. For example,
// A simple declaration that takes two arguments.
// The LHS is the layer to be selected from; the RHS is the
// constraint defining the criteria area.
...
// ... as is a variable of appropriate type.
c : constraint = (27.0, 42.5);
x : lyr1 area c;
Argument Bindings
Argument bindings determine which arguments at a call site are bound to which
parameters in the function prototype.
• Select candidate prototypes based on the name of the function and the type required
by the context. If the type cannot be determined, use only the function name. When an
operator-style function call is made, the external arguments are bound to parameters
in the prototype. For binary functions, the left argument is bound to the first parameter,
and the right argument is bound to the second parameter.
• Each name-value argument is bound to the remaining like-named parameter.
• Any arguments that do not have names are bound to the remaining undefaulted
parameters from left to right.
• If all remaining unbound parameters have defaults, the function is used.
• It is an error if argument binding fails to match exactly one function prototype.
For example, given the function MyFunc defined, as follows:
MyFunc: binary function (
fromLayer: lhs layer,
touching: rhs layer,
corners: boolean = true,
whole_polygon: boolean
) returning e: edge_layer;
The following code produces a compile-time error because the whole_polygon argument
does not have a default:
/* illegal function call */
Tbroken = metalLayer MyFunc selectLayer;
Defaults
Defaults are specified as right-side assignments to optional parameters in the function
prototypes. For example, here is the syntax of the and() function:
and : function(
layer1 : polygon_layer,
layer2 : polygon_layer,
processing_mode : processing_mode_e = HIERARCHICAL,
The following call of the and() function uses the defaults for the processing_mode and
remove_hierarchical_overlap arguments.
c = and( a, b );
The following call of the and() function uses an optional setting for the processing_mode
argument and the default for the remove_hierarchical_overlap argument.
c = and( a, b, CELL_LEVEL );
Resetting Defaults
You can override the default of any function argument. Any argument can have its default
changed; it can be numerical, a Boolean value, or an enumerator.
In this example, the default for the argument x of the abc function is set:
abc : literal function ( x : double = 123) returning z : double;
A = abc(); // == abc(123)
Caution:
When you reset defaults, you are redefining the function, and all of its
arguments must be listed.
Anonymous Functions
Several IC Validator commands allow a user function to be passed in as an argument.
Anonymous functions provide a mechanism for expressing simple behavior inline or for organizing
more complex behavior. You can use anonymous functions where function names are
expected. The compiler generates an internal name and declares the function at the
location in the code immediately preceding its invocation.
Traditionally, global variables can be used and set to different values for different calls
depending on the need. For example:
x : integer;
y : integer;
calc_nmos_props : function (void) returning void { … }
x = 0;
y = 5;
nmos(…, calc_nmos_props, …);
x = 1;
y = 4;
nmos(…, calc_nmos_props, …);
However, anonymous functions allow this code to be more compact and well organized.
In the following example, a hidden function is declared just before each of the NMOS calls
which then use it.
calc_nmos_props : function (x : integer, y : integer) returning void
{ … }
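Using the refactored calc_nmos_props() above, each call site can pass an anonymous wrapper in place of a named function; this is a sketch, with the other nmos() arguments abbreviated as in the original example:

```
nmos( …, function (void) returning void { calc_nmos_props(0, 5); }, … );
nmos( …, function (void) returning void { calc_nmos_props(1, 4); }, … );
```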
Function Overloading
PXL allows you to have more than one function with the same name in some cases.
Multiple functions can have the same name if PXL can determine which function is to
be called by using the types and parameter names. If PXL is not able to determine the
function to be called, it results in an error.
You can have two different functions with the same name if one returns a value and the
other returns a void.
When a function appears at the statement level, it is assumed to be the void-returning
function; when it appears in an expression, it is assumed to be the value-returning
function:
MyFunction(a,b,c); // use the void-returning function
// "MyFunction"
x = MyFunction(a,b,c); // use the value-returning function
// "MyFunction"
Note:
PXL only supports function overloading based upon different return types.
Function Recursion
Recursive function calling is not supported in PXL. For example:
noCanDo : function( … ) returning integer {
…
x = noCanDo(…); // illegal: recursive call
…
}
Program Structure
A PXL program is a sequence of declarations, function definitions, and statements. A
program is executed from top to bottom.
PXL sets these preconditions for a program to be valid:
• All objects must be defined before they are used.
• All statements are evaluated only for side effects. An expression must be part of an
assignment.
10 + 5; // illegal because the result (15) is not captured
Compound Statement
A compound statement defines a nested scope in which local variables can be defined.
Compound statements can be used anywhere a single statement is allowed. For
information about scope, see Scope of Variables on page 173.
• Compound statements allow local variables to hide previously defined variables
at other scoping levels. The compiler reports a warning. Hiding is also known as
shadowing.
• Compound statements use braces ({ }) to identify the compound statement.
{ ; } // Smallest legal compound statement
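Here is a small sketch of shadowing inside a compound statement; the variable names are hypothetical:

```
i : integer = 1;
{
    i : double = 2.5;   // hides the outer integer "i"; the compiler warns
    note( "inner i = " + i + "\n" );
}                       // the outer "i" is visible again here
```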
Violations
Some IC Validator functions can produce violation data, which is also referred to
as error output, instead of or in addition to layer output. These violations are stored
in an SQLite database (the error database, PYDB), which is available to VUE for
debugging, and written to the LAYOUT_ERRORS file at the end of the run. There are
also some PXL functions that take violation data as input, such as the write_gds() and
violation_empty() functions.
If you want to directly access the error data produced by the IC Validator tool, see
Appendix E, PYDB Perl API, for information about using the IC Validator Perl application
programming interface (API).
Violation Comments
All violation output is associated with a comment string. This comment appears as a
heading in the LAYOUT_ERRORS file as well as in VUE. Any number of functions can
output to the same violation comment.
The violation comment for current and nested PXL scopes is specified by using the @
operator. If no comment is specified in the runset, violations are given a default comment
of "Violation".
The following example shows how to program violation comments that are unique for each
layer in a set of layers. The violations are named “Metal1 spacing <= 0.1um”, “Metal2
spacing <= 0.1um”, and so forth for each layer.
for (i=0 to metal_layers.size()-1) {
@ "Metal" + (i+1) + " spacing <= 0.1um";
external1(metal_layers[i], distance<=0.1);
}
The following example shows the behavior of violation comments with respect to scope.
Note:
Typically, the violation comment definition (@ operator) should not be separated
from the functions to which it applies. The following example highlights the
behavior of violation comments with respect to scope.
@ "First Error Message";
{ @ "Second Error Message"; // A new scope is created here
local_var = abc(layer2);
internal1(local_var, distance < 0.2); // Errors from internal1()
// are stored in the database
// under "Second Error Message"
} // The original scope is restored here
Some functions set violation comments through arguments instead of the @ operator. For
example, the text_net() function accepts a shorted_violation_comment argument:
cdb1 = text_net(cdb1,
{ { M1, M1_text } },
shorted_violation_comment = "Rule 4.5.6: No shorts");
The IC Validator tool processes violation comments using command-line options such as
-svc and -uvc. For example:
{@ "A";
{@ "B";
internal1(…);
};
};
The tool considers only the lower (innermost) violation comment for selection. In this case,
setting either -svc B or -uvc A executes the internal1() function; however, setting -svc A or
-uvc B does not.
Violation-Producing Functions
Most DRC and data generation functions are overloaded; that is, they have two versions.
The version that returns a polygon layer does not produce error output. The void-returning
version produces error output but not a polygon layer. Here are examples of the two
versions of the external1() function.
// This external1 outputs to a polygon layer, no errors are produced
t1 = external1(layer1, distance < 0.1);
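The void-returning version is called the same way but without assigning the result; as a sketch:

```
// This external1 produces error output under the current violation
// comment; no polygon layer is returned
external1(layer1, distance < 0.1);
```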
There are also some PXL functions that do not return void, but can also produce error
output. For example, the text_net() function returns a connect database but also
produces error output for text shorts, opens, and unused text.
Violation Variables
The built-in PXL type violation can be used for variables to hold error data if there is a
need to use it as input to another PXL function later in the runset. For example, violation
variables can be used as input to functions such as write_gds() to output error polygons,
or violation_empty() to determine if the violation contains any errors.
As the following example shows, some PXL functions return violations directly:
// These functions return violations for which there is not an associated
// PXL function
v1 = get_layout_grid_violation();
v2 = get_layout_drawn_violation();
Note:
Error output can be restricted by arguments such as error_limit_per_check
in the error_options() function. Those options can influence how much data
is actually stored in the error database. Discarded errors are not available for
write_gds() and other functions.
Violation Blocks
Violation variables can also be assigned by using violation blocks. Violation blocks are
specified by using the @= operator. The left side is a violation variable. A code block
enclosed by curly braces follows the @= operator. Within this block, all functions which
produce error output are associated with the violation variable. For example,
// v1 contains the error output from the external1() check
v1 @= {
@ "Rule 1";
l1 = a and b not c;
external1(l1, distance < 2);
}
Execution Model
The execution model for PXL runs operations as their required resources become available. Any
operation in a particular program is eligible for execution at any time if its preconditions are satisfied.
There is a strong bias to execute diagnostics early, meaning that all the diagnostics that
are ready are executed immediately during the initial processing of the program if possible.
Therefore, you might see out-of-order execution of statements in a PXL program. The
final results are unaffected by this reordering; however, intermediate results can appear
at different times or simultaneously. This out-of-order execution is also known as parallel
execution.
The same program, when executed different times, always generates the same results.
These results can appear in a different order, however, based on availability of resources.
Possible runtime errors are
• Attempting to use any variable that has not been assigned a value.
• Indexing beyond the end of a list or with a negative index.
• Reading from a hash with a key value for which there is no element.
Preprocessor
PXL uses a preprocessor to make the development and execution of programs easier
and faster. A standard preprocessor module is integrated into the PXL compiler. Table 46
shows the preprocessor capabilities.
Note:
The line length limit for the preprocessor is 64K characters. Exceeding this
limit can usually be worked around by inserting a carriage return in the place of
white space or a delimiter, such as a comma (,).
Standard macro support    #define . . . #undef, including token construction in the macro body (# and ## operators).
Use preprocessor directives to remove large blocks of code; that is, to enclose the section
within a pair of #if 0 and #endif statements. For example,
#if 0
// everything through the matching #endif is treated as a comment.
if (something == true)
{
// a line comment.
do_something(useful = true);
}
#endif
Here is an example of using directives to enforce one-time inclusion. The first time this file
is encountered, MYFILE_RH is not defined. The #define MYFILE_RH directive then defines this
symbol, and the code body is included. The code body is never included
again because MYFILE_RH exists and the #ifndef evaluates to false.
#ifndef MYFILE_RH
#define MYFILE_RH
// ... code body, included only one time ...
#endif
Predefined Identifiers
During preprocessing, the predefined identifiers shown in Table 47 are supported.
Identifier Content
For example, __PXL_DATE__ and __PXL_TIME__ can be used to make code that
changes behavior based on the current date. Because these variables are preprocessor
variables, they can be used in #if and #define statements as well.
/**
* define a new token, expire_20061231, that is replaced
* by the token deprecated before 2006.12.31 but expands
* to obsolete thereafter.
*/
#if __PXL_DATE__ < 20061231
#define expire_20061231 deprecated
#endif
#ifndef expire_20061231
#define expire_20061231 obsolete
#endif
#if VERSION_GE(2009, 6, 1, 0)
. . . // code that uses new features introduced in 2009.06.SP01
#else
. . . // code that uses older method (pre 2009.06.SP01)
#endif
Macro    Definition
VERSION_GE(year, month, service_pack, patch)    Tests to see if the current release version is greater than or equal to the specified release version.
VERSION_LE(year, month, service_pack, patch)    Tests to see if the current release version is less than or equal to the specified release version.
VERSION_GT(year, month, service_pack, patch)    Tests to see if the current release version is greater than the specified release version.
VERSION_LT(year, month, service_pack, patch)    Tests to see if the current release version is less than the specified release version.
VERSION_EQ(year, month, service_pack, patch)    Tests to see if the current release version is equal to the specified release version.
Macro    Definition
DATE_EQ(year, month, day)    Tests to see if the current date is equal to the date specified by the year, month, and day integers.
DATE_GE(year, month, day)    Tests to see if the current date is greater than or equal to the date specified by the year, month, and day integers.
DATE_GT(year, month, day)    Tests to see if the current date is greater than the date specified by the year, month, and day integers.
DATE_LE(year, month, day)    Tests to see if the current date is less than or equal to the date specified by the year, month, and day integers.
DATE_LT(year, month, day)    Tests to see if the current date is less than the date specified by the year, month, and day integers.
If POLY is empty,
DEBUG: Empty POLY test
Command-line definition:
icv -D layer_empty\(x\)=true runset.rs
Command-line definition:
icv -D layer_empty\(x\)=false runset.rs
See Flow Control on page 197 and the -C command-line option in Command-Line
Options on page 23 for more information.
Qualifiers
A qualifier affects an item it marks by changing its behavior or the operations that are
allowed on it. Table 50 defines the qualifiers.
Table 50 Qualifiers
Qualifier Definition
const The object causes an error each time it is assigned a new value. Initialization does not cause this error; only direct assignment, or passing the object through a function parameter qualified with out or in_out, causes it.
deprecated The object causes a warning each time that it is defined or referenced. A function parameter being assigned an initial value in the function prototype, or being used within the body of the function of which it is a parameter, does not generate this warning.
in_out The function parameter requires an assignable item (an lvalue) in each function call. A nonassignable item or an item of the wrong type causes an error. The value of that item is also changed as a result of the function call. See Assignment Operators on page 175 for more information about lvalue.
inline Tags a function as requiring expansion in an IC Validator run using the -ndg command-line option. See Command-Line Options on page 23.
literal The function parameter requires that the function argument be a literal expression. That is, the compiler must be able to resolve the value of the function argument at parse time.
obsolete The item causes an error each time it is defined or referenced. A function parameter being assigned an initial value in the function prototype does not generate this error.
out The function parameter requires an assignable item (an lvalue) in each function call. A nonassignable item or an item of the wrong type causes an error. This parameter is assumed to have no value when the function body is processed. The value of that item is also changed as a result of the function call. See Assignment Operators on page 175 for more information about lvalue.
A qualifier can be used to qualify only relevant items. The items that each qualifier applies to are shown in Table 51.
Table 51 Qualifier Items
• function.rh
#ifndef FUNCTION_RH // include guard, matching the #endif below
#define FUNCTION_RH
#include <icv.rh>
// Function prototypes
fx1 : function (
lyr : polygon_layer,
w : double
) returning out: polygon_layer;
fx2 : function (
lyr : polygon_layer,
d : double
) returning out: polygon_layer;
#include <function.rs>
#endif
• function.rs
// Function definitions
fx1 : function (
lyr : polygon_layer,
w : double
) returning out: polygon_layer {
// code body
out = f( lyr, w);
};
fx2 : function (
lyr : polygon_layer,
d : double
) returning out: polygon_layer {
// code body
out = f( lyr, d);
};
• main.rs
#include <icv.rh>
#include <function.rh>
y = fx1( a, 0.1);
z = fx2( b, 0.3);
8
Dynamic-Link Library Support
This chapter explains how to use the dynamic-link feature of the IC Validator tool.
Use the dynamic-link feature specifically for device extraction. This feature allows you to
pass measurement data to C libraries that are external to the IC Validator tool, and then
return the results from the C libraries back to the IC Validator tool. The C libraries must be
shared (dynamic) libraries.
Dynamic-link library support is described in the following sections:
• Dynamic-Link Functions
• Using the Dynamic-Link Functions
Dynamic-Link Functions
The following IC Validator functions support the dynamic-link library feature:
• dev_dlink_library_close() function
• dev_dlink_library_open() function
nmos(
property_function=my_mos_func,
dlink_libraries = {dlink_handle}
);
dev_dlink_library_close(dlink_handle);
2. Define the calculations for the device measurement data using the utility functions. The
dynamic-link remote function is typically organized like this:
my_mos_func : function(void) returning void {
//(1) Get measurement data for the device using the
// device utility functions
//(2) Encode the measurement data into a 1-dimensional array
//(3) Retrieve dynamic-link library file handle using the
// dev_dlink_library() utility function
//(4) Call user-defined dynamic-link function using the dev_dlink()
// utility function, and pass essential information to the
// external shared libraries
//(5) Decode the results returned from the shared library into
// IC Validator data
}
3. The user-defined wrapper function, written in the C language, has this typical flow:
customer_wrapper_function() {
//(1) Decode the measurement data into the customer's format
//(2) Select the function pointer according to the string parameter
//(3) Call the internal function
//(4) Return the computed results
}
9
Unified Fill
Using the functions and PXL, the programming language of the IC Validator tool, you can
create both simple and complex procedures to meet your fill requirements. This chapter
describes the basics of how to use the unified_fill() function, an application-oriented,
fill insertion function. This chapter also shows several usage models for runsets using the
unified fill feature.
Note:
The unified_fill() function is described in the IC Validator Reference Manual.
A fill pattern defined in the unified_fill() function can have either a single fill layer or
multiple fill layers. That is, the function can output more than one polygon layer. Each fill-
layer definition can have a single rectangle or any number of nonrectangular polygons.
Additionally, every pattern can be viewed as a structure that is repeated inside the target
region as defined by the spacing constraints.
The unified fill feature is described in the following sections:
• Overview
• Types of Fill Patterns
• Fill-to-Fill Spacing
• Target Region and Fill-to-Signal Spacing
• Range and Width Dependent Spacing
• Polygon Grouping
• Signal Aligned Fills
• Improving Pattern Insertion
• Pitch Aligned Fills
• Criteria Analysis
• Output Layers
• Layer Compression
Overview
The unified_fill() function provides a high level of automation and flexibility in
developing a fill procedure. The fill requirement might be as simple as generating an array
of rectangular patterns inside a layer, or as complex as filling a design with a variety of
patterns to meet a certain density target.
The unified_fill() function supports many options that you can configure in different
ways for the desired fill requirement. Some of the high-level features of this function are:
• Ability to define different types of patterns
• Ability to define fill insertion sequences that use fill cells read from an external
database
• Equation-based criteria analysis
• Layer coloring
• Signal-aligned fills
• Automated target layer creation
Table 52 defines the terms used for the unified fill feature.
Table 52 Unified Fill Terminology
Term Definition
Fill pattern and fill structure Fill configuration composed of one or more fill layers. A pattern can be defined as either
• A single fill layer
• Multiple fill layers
Fill-to-signal spacing Spacing from the fill pattern, or from the fill layer, to the design layer
Figure 41 summarizes the features that are available for each type of fill pattern.
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill}
}
}}
SHAPE_M1_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M1_fill"}
polygons = DATA_M1_fill,
};
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill}
}
}}
SHAPE_M1_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M1_fill"}
polygons = DATA_M1_fill,
};
SHAPE_M2_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M2_fill"}
polygons = DATA_M2_fill,
};
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill,
SHAPE_M2_fill}
}
}}
SHAPE_M1_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M1_fill"}
polygons = DATA_M1_fill,
};
SHAPE_M2_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M2_fill"}
polygons = DATA_M2_fill,
};
fill_patterns = {
{type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill,
SHAPE_M2_fill}
}
},
{type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill}
}
}
}
Example 6 Adjustable Rectangle Fill Pattern Example Using a Fill Pattern of Type
UF_ADJUSTABLE
fill_patterns = {{
type = UF_ADJUSTABLE,
adjustable_fill = {
layer_spec = {output_layer_key = "M1_Fill"}
width = 1.0,
height = 1.0,
width_bound = 4.0,
width_delta = 0.05,
}
}}
Example 7 Stack Fill Pattern Example Using stack_fill Option of unified_fill() Function
SHAPE_M1_fill : stack_layer_s = {
layer_spec = {output_layer_key = "M1_fill"}
polygons = DATA_M1_fill,
};
SHAPE_M2_fill : stack_layer_s = {
layer_spec = {output_layer_key = "M2_fill"}
polygons = DATA_M2_fill,
};
SHAPE_M3_fill : stack_layer_s = {
layer_spec = {output_layer_key = "M3_fill"}
polygons = DATA_M3_fill,
};
SHAPE_V1_fill : stack_layer_s = {
layer_spec = {output_layer_key = "V1_fill"}
polygons = DATA_V1_fill,
};
SHAPE_V2_fill : stack_layer_s = {
layer_spec = {output_layer_key = "V2_fill"}
polygons = DATA_V2_fill,
};
fill_patterns = {{
type = UF_STACK,
stack_fill = {
layers = {SHAPE_M1_fill,
SHAPE_V1_fill,
SHAPE_M2_fill,
SHAPE_V2_fill,
SHAPE_M3_fill},
min_stack = 1,
max_stack = 5
}
}}
fill_patterns = {
{type = UF_POLYGON,
polygon_fill = { // First pattern for insertion
layers = {SHAPE_M1_fill,
SHAPE_V1_fill,
SHAPE_M2_fill,
SHAPE_V2_fill,
SHAPE_M3_fill}
}
},
{type = UF_POLYGON,
polygon_fill = { // Second pattern for insertion
layers = {SHAPE_M1_fill,
SHAPE_V1_fill,
SHAPE_M2_fill}
}
},
{type = UF_POLYGON,
polygon_fill = { // Third pattern for insertion
layers = {SHAPE_M2_fill,
SHAPE_V2_fill,
SHAPE_M3_fill}
}
},
{type = UF_POLYGON,
polygon_fill = { // Fourth pattern for insertion
layers = {SHAPE_M1_fill}
}
},
{type = UF_POLYGON,
polygon_fill = // Fifth pattern for insertion
{layers = {SHAPE_M2_fill}
}
},
{type = UF_POLYGON,
polygon_fill = // Sixth pattern for insertion
{layers = {SHAPE_M3_fill}
}
}
}
The order of the individual layer definitions determines the order of the pattern variations
that are placed when the layers are defined as shown in Example 9.
The number of variations of a pattern that can be constructed by the tool depends on the
values of the min_stack and max_stack options.
• The min_stack value specifies the minimum number of fill layers required in any
variation.
• The max_stack value specifies the maximum number of fill layers allowed in a
variation.
Figure 46 shows an example of all pattern variations for a fill application in which the
pattern definition consists of 5 metals and 4 vias as the fill layers. In this example,
max_stack = 9 and min_stack = 5.
Depending on the layer definitions in the pattern, the bounding box of the pattern might
be larger than the individual bounding box of each layer definition. In such cases, the
bounding box of the smaller variations of the pattern might be different from that of the
complete pattern definition, which consists of all the layers.
By default, the unified_fill() function retains the bounding box of the complete pattern
for all of its variations. All automatic variations of the pattern by default align with the
bounding box of the complete pattern, as shown in Figure 46.
Example 10 and Figure 48 show an example where the full pattern is a structure with
two fill layers, M1_fill and M2_fill, which are shown in Figure 47. There are two variations
of this pattern, a pattern with only M1_fill and a pattern with only M2_fill. However, all
patterns have the same bounding box as that of the full pattern. As a result, it might
appear that the individual rectangles on M1_fill are spaced farther apart than the specified
spacing value. This is because the spacing is based on the bounding box.
Example 10 Pattern Structure With Two Fill Layers Using stack_fill Option of unified_fill()
Function
fill_patterns = {{
type = UF_STACK,
stack_fill = {
pattern_spec = {space_x = 0.1,
space_y = 0.1},
layers = {SHAPE_M1_fill,
SHAPE_M2_fill},
min_stack = 1,
max_stack = 2
}
}}
By default, the unified_fill() function retains the bounding box of the complete pattern
for all of its variations.
You can change this alignment behavior by setting the separate_patterns option to
true. In that case, every pattern variation has its own bounding box based on the extents
of the subset of layers in it. When inserting a pattern variation, the bounding box of that
specific pattern variation is used instead of the bounding box of the complete pattern.
In Figure 49, the two variations of the full pattern have their own bounding box. As
a result, they are placed closer together than when the separate_patterns option is
false.
Figure 49 Pattern Structure With Two Fill Layers With separate_patterns Option True
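The configuration behind Figure 49 can be sketched as follows. This extends Example 10; the placement of the separate_patterns option inside the stack_fill definition is an assumption, so check the unified_fill() reference page for the exact location of the option:

```
fill_patterns = {{
type = UF_STACK,
stack_fill = {
pattern_spec = {space_x = 0.1,
space_y = 0.1},
layers = {SHAPE_M1_fill,
SHAPE_M2_fill},
min_stack = 1,
max_stack = 2,
separate_patterns = true // each variation uses its own bounding box
}
}}
```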
STACK_metal_fill : cell_fill_s =
{
cell = Fill_Cell
layers = { // Map Individual layers of the cell to Fill layers
{ldt_list = {{1, 0}},layer_spec = {output_layer_key = "M1_fill"}},
{ldt_list = {{2, 0}},layer_spec = {
output_layer_key = "V1_fill"},
required_layer_keys = {"M1", "M2"}},
{ldt_list = {{3, 0}},layer_spec = {output_layer_key = "M2_fill"}},
{ldt_list = {{4, 0}},layer_spec = {
output_layer_key = "V2_fill"},
required_layer_keys = {"M2", "M3"}},
{ldt_list = {{5, 0}},layer_spec = {output_layer_key = "M3_fill"}}
},
min_stack = 1,
max_stack = 5,
rotation = NONE,
reflection = false
};
Note:
Depending on your requirements, the UF_STACK type provides more automation
in developing a fill procedure than the UF_POLYGON type. Similarly, the UF_CELL
type provides more automation than the UF_STACK and UF_POLYGON types.
Figure 50 shows an example usage model for the UF_CELL type.
Figure 52 shows the expansion of a fill cell. The top half of the figure shows the base cell
being inserted before the expansion. The bottom half of the figure shows the base cells
that were expanded by repeating the middle unit.
};
EXPAND_CELL_RIGHT_LAYER_2 : polygon_layer_s = {
layer_spec = {output_layer_key = "OUTPUT_EXPAND_CELL_RIGHT_LAYER_2"},
polygons = {{ {0.045, 0.128}, {0.191, 0.128}, {0.191, 0.432}, {0.155, 0.232} }}
};
// Declare the actual three incremental unit cells that make up the base cell
cell_A_left : base_cell_s = {
layers = {EXPAND_CELL_LEFT_LAYER_1, EXPAND_CELL_LEFT_LAYER_2 },
repeatable = false
};
cell_B_middle : base_cell_s = {
layers = { EXPAND_CELL_MIDDLE_LAYER_1, EXPAND_CELL_MIDDLE_LAYER_2 },
repeatable = true,
maximum = 10
};
cell_C_right : base_cell_s = {
layers = { EXPAND_CELL_RIGHT_LAYER_1, EXPAND_CELL_RIGHT_LAYER_2 },
repeatable = false
};
// Call unified_fill()
expand_fill_cell_output = unified_fill(
fill_patterns = {
{type = UF_EXPANDABLE,
expandable_polygon_fill = {
pattern_spec = {
space_x = 0.12,
space_y = 0.06,
stagger_x = 0.2 },
base_cell = {cell_A_left, cell_B_middle, cell_C_right} }
}
},
fill_boundary = { type = LAYER, layer = REGION_CELL2_CORE }
);
// Retrieve the output layers
expand_fill_cell_layer_1 =
expand_fill_cell_output ["OUTPUT_EXPAND_CELL_LEFT_LAYER_1"][0]
or expand_fill_cell_output ["OUTPUT_EXPAND_CELL_MIDDLE_LAYER_1"][0]
or expand_fill_cell_output ["OUTPUT_EXPAND_CELL_RIGHT_LAYER_1"][0];
expand_fill_cell_layer_2 =
expand_fill_cell_output ["OUTPUT_EXPAND_CELL_LEFT_LAYER_2"][0]
or expand_fill_cell_output ["OUTPUT_EXPAND_CELL_MIDDLE_LAYER_2"][0]
or expand_fill_cell_output ["OUTPUT_EXPAND_CELL_RIGHT_LAYER_2"][0];
fill_boundary = {
type = LAYER,
layer = M1
},
extents_output = {
{output_layer_key = "OUTPUT_DM_EXTENTS"}
}
);
Fill-to-Fill Spacing
As described earlier in this chapter, you can configure the unified_fill() function to insert
different types of patterns into specified target regions. By providing a list of pattern
definitions, you instruct the tool to fill the target region by repeating each pattern definition
in sequential steps.
Note:
These values can be negative in certain conditions to support overlap of pattern
placements.
Additionally, for the spacing between fill polygons and patterns that are placed inside
different target regions, you can set the spacing between patterns by using the
pattern_spacing option inside the pattern_spec option, or you can set the spacing
between individual fill layers by using the fill_to_fill_spacing option inside the
layer_spec option. The options are:
• allowed_spacing_x
• allowed_spacing_y
• min_space_corner
• extension
See the unified_fill() function in the IC Validator Reference Manual for descriptions of these options. Use these options to
trigger more complex layer-specific spacing checks during fill treatment. The space_x,
space_y, stagger_x, and stagger_y options apply to the bounding box of the entire
pattern definition in accordance with the separate_patterns option.
By design, the unified_fill() function makes sure that no two patterns get closer
than the space_x and space_y values. In Example 14, space_x and space_y values are
specified for the multilayer pattern definition. If patterns falling in adjacent polygons of
the target region come too close, one of them is not inserted to meet the fill-to-fill spacing
constraint, as shown in Figure 55.
Example 14 Spacing Between Same Patterns Using the space_x and space_y Options
SHAPE_M1_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M1_fill"}
polygons = DATA_M1_fill,
};
SHAPE_M2_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M2_fill"}
polygons = DATA_M2_fill,
};
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
pattern_spec = {space_x = 0.1,
space_y = 0.1},
layers = {SHAPE_M1_fill,
SHAPE_M2_fill}
}
}}
Example 15 Spacing Between Multilayer Patterns With Allowed Spacing Values Smaller Than
Space Values
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
pattern_spec = {space_x = 0.1, space_y = 0.1,
pattern_spacing = {
allowed_spacing_x = {>= 0.02},
allowed_spacing_y = {>= 0.02}
}
},
layers = {SHAPE_M1_fill,
SHAPE_M2_fill}
}
}}
Figure 56 Spacing Between Multilayer Patterns With Allowed Spacing Values Smaller Than
Space Values Example
Figure 57 Using the Allowed Spacing Values in the fill_to_fill_spacing Option Example
Note:
The spacing is always applied to the bounding box of every pattern.
Figure 58 shows an application that uses three patterns, A, B, and C. The spacing between
the patterns is expected to be as follows:
Pattern A:
• Space to Pattern B = 2
• Space to Pattern C = 1
Pattern B:
• Space to Pattern A = 2
• Space to Pattern C = 3
Pattern C:
• Space to Pattern A = 1
• Space to Pattern B = 3
fill_patterns = {
// First Pattern A
{type = UF_POLYGON,
polygon_fill = {
pattern_spec = {
space_x = 0.1, space_y = 0.1,
other_pattern_spacing = { 1 => >= 2, 2 => >= 1}
},
layers = {SHAPE_A}
}
},
// Second Pattern B
{type = UF_POLYGON,
polygon_fill = {
pattern_spec = {
space_x = 0.1, space_y = 0.1,
other_pattern_spacing = { 0 => >= 2, 2 => >= 3}
},
layers = {SHAPE_B}
}
},
// Third Pattern C
{type = UF_POLYGON,
polygon_fill = {
pattern_spec = {
space_x = 0.1, space_y = 0.1,
other_pattern_spacing = { 0 => >= 1, 1 => >= 3}
},
layers = {SHAPE_C}
}
}
}
• The space_x and space_y values can be less than or equal to 0 (<= 0) if
the type option is UF_POLYGON, UF_STACK, UF_CELL, UF_EXPANDABLE, or
UF_EXPANDABLE_CELL. In addition, for UF_STACK and UF_CELL, the separate_patterns
option must be false, and the grouping option must be NONE.
• Corner checking, using the min_space_corner and extension options, is disabled if
either the space_x or space_y option is negative. By default, a corner spacing check
between fill patterns in different fill regions is based on max(space_x, space_y).
SHAPE_M2_fill : polygon_layer_s = {
layer_spec = {
output_layer_key = "M2_fill",
fill_to_signal_spacing = {
{aM2, min_space = 0.1},
{aM2_FILL, min_space = 0.5}
}
}
polygons = DATA_M2_fill,
};
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill,
SHAPE_M2_fill},
fill_to_signal_spacing = {
{aBLKG_1, min_space = 0.3},
{aBLKG_2, min_space = 0.2}
}
}
}}
For fill patterns of the UF_POLYGON type, the multilayer pattern is not broken down into smaller
variations of the pattern. As shown in Figure 59, the pattern is repeated in the target
region by keeping a minimum distance from the design layers as specified in the
fill_to_signal_spacing option.
By default, the unified_fill() function measures spacing from the edge of the design
layer to the bounding box of the pattern. The keepout region for the pattern is constructed
using all constraints of the fill_to_signal_spacing option defined for this pattern, as
well as for every individual layer definition in that pattern.
There are two methods of measuring spacing between fill and signal layers. The
spacing_context option controls which method of measurement is used for a given fill
pattern:
• When set to UF_PATTERN, spacing is measured from the signal to the bounding box of a
fill pattern.
• When set to UF_LAYER, spacing is measured from the signal to individual polygons of a
layer inside the pattern.
Note:
When the space_x and space_y values are negative, the spacing_context
option setting of UF_PATTERN is not supported.
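For example, a pattern definition that measures spacing at the layer level might look like the following sketch; the exact placement of the spacing_context option within the pattern definition is an assumption, so check the unified_fill() reference page:

```
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
spacing_context = UF_LAYER, // measure from the signal to individual
// polygons of each fill layer
layers = {SHAPE_M1_fill,
SHAPE_M2_fill},
fill_to_signal_spacing = {
{aM1, min_space = 0.1}
}
}
}}
```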
}
polygons = DATA_M1_fill,
};
SHAPE_M2_fill : polygon_layer_s = {
layer_spec = {
output_layer_key = "M2_fill",
fill_to_signal_spacing = {
{aM2, min_space = 0.1}
}
}
polygons = DATA_M2_fill,
};
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill,
SHAPE_M2_fill}
}
}}
Example 20 fill_to_signal_spacing Defined Both in the layer_spec Option and in the Pattern
SHAPE_M1_fill : polygon_layer_s = {
layer_spec = {
output_layer_key = "M1_fill",
fill_to_signal_spacing = {
{aM1, min_space = 0.5}
}
}
polygons = DATA_M1_fill,
};
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill},
fill_to_signal_spacing = {
{aM1, min_space = 0.1}
}
}
}}
In Example 21, as shown in Figure 62, multiple design layers are used with
fill_to_signal_spacing values at different places in the fill layer definitions and the fill
pattern definition.
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill,
SHAPE_M2_fill},
fill_to_signal_spacing = {
{aBLKG, min_space = 5.0}
}
}
}}
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill}
}
}}
Polygon Grouping
For UF_STACK and UF_CELL fill types, the unified_fill() function can break the pattern
definition into smaller permutations. If a pattern definition has two fill layers, for example,
M1_fill and M2_fill, there can be three permutations:
• Complete pattern
• Pattern with only M1_fill
• Pattern with only M2_fill
When an individual layer definition of a fill layer has multiple polygons, you can configure
the unified_fill() function to break the layer definition further, so that either all or none
of the fill-layer polygons are retained when pattern placements cause spacing violations.
When multiple polygons constitute a layer definition in a pattern, the unified_fill()
function retains, by default, all polygons that meet fill-to-signal spacing in every repetition
of the pattern.
As shown in Figure 63, a multilayer pattern, indicated by dashed lines, is repeated inside
a target region. The spacing_context option is UF_LAYER. The pattern consists of two fill
layers, each composed of two rectangles. Based on fill-to-signal spacing of the fill layer
to the design layer, every instance might have either all of the polygons or a partial set of
polygons. Polygons causing spacing violations are removed.
Alternatively, you can configure the unified_fill() function in such a way that if any
polygon of a fill layer inside a pattern repetition causes spacing violations, then all
polygons of that fill layer are removed from that specific pattern repetition. For this
behavior, set the grouping option to ALL in the definition of the fill layer. For example, as
shown in Example 23 and Figure 64, both polygons of the M1_fill fill layer are removed
from the top right pattern repetition, even though only one polygon violates the spacing.
• When the space_x and space_y values are negative, the grouping option setting of
ALL is not supported.
• The grouping option does not apply to the UF_POLYGON and UF_ADJUSTABLE types.
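Following the description above, a minimal sketch of a fill-layer definition that sets the grouping option to ALL; the exact position of the grouping option inside the layer definition is an assumption:

```
SHAPE_M1_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M1_fill"},
polygons = DATA_M1_fill,
grouping = ALL // if any polygon of this layer violates spacing in a
// pattern repetition, all polygons of the layer are
// removed from that repetition
};
```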
Use the signal option only with the UF_ADJUSTABLE and UF_POLYGON types. With the
UF_POLYGON type, the pattern must be a single layer with a single shape. You must set
the fill_context option to SIGNAL_CONTEXT in the definition of the fill_patterns
argument. With the ring_count option of the signal option, you can control the number
of rings of fill pattern shapes that surround each design layer shape. In Figure 65, the ring
count is 2. Figure 66 and Example 24 show an example with a ring count of 4.
fill_patterns = {{
type = UF_POLYGON,
polygon_fill = {
pattern_spec = {space_x = 0.1,
space_y = 0.1},
layers = {SHAPE_M1_fill},
fill_context = SIGNAL_CONTEXT,
signal = {layer = aM1,
ring_count = 4}
}
}}
You can use the signal option with the width-dependent spacing functionality that is
described in the Range and Width Dependent Spacing section. Use this functionality in
applications that require the fill shapes to be kept away from the design layer shape by
different minimum space values that depend on the width of the design layer shape. Fill
pattern rings are generated using all spacing constraints.
The following rules and restrictions apply for signal aligned fills:
• The signal option cannot be used with the UF_STACK or UF_CELL type.
• The signal option cannot be used when pattern definitions have multiple fill layers or if
the fill-layer definition has multiple polygons.
• The stagger_y option is not supported with the signal option. The stagger_y option
is ignored.
• The fill_context option must be SIGNAL_CONTEXT. With this setting, the partition,
insertion, and pitch options are ignored.
• The width_based_spacing option is not supported for the design layer specified in
the signal option. Use the min_space option in fill_to_signal_spacing to set the
minimum spacing between the fill layer and the design layer specified in the signal
option.
For a detailed description of the insertion option, see the unified_fill() function in
the IC Validator Reference Manual.
• Specify GLOBAL to place the fill shapes by using the lower-left of the entire database
extents as the reference.
• Specify LOCAL to place fill shapes by allowing every polygon on the input layer to
have its own unique reference grid point.
The reference grid point is the lower-left of the bounding box of every individual polygon
belonging to the input layer.
With the unified_fill() function, in addition to the GLOBAL and LOCAL grid modes, you
can enforce more complex control over the location of the pattern grid by referencing
another layer. Along with the target layer, you can also specify a context layer. When a
context layer is used, fill patterns are placed inside the target layer polygons so that the
lower-left vertex of the bounding box of each fill pattern aligns with grid points that
originate from the lower-left vertex of the bounding box of each individual polygon on the
context layer. A context layer polygon can encompass multiple polygons on the target
layer.
Figure 69 shows a comparison of the grid_mode argument of the fill_pattern()
function with the fill_context option of the unified_fill() function.
Example 26 and Figure 70 show the behavior of the pitch option of the unified_fill()
function. In this example, the pitch option is applied only to the y-direction grid point. The
fill patterns in a given target layer polygon are placed starting from the grid point that is
closest to the lower-left vertex of the bounding box of the specific target layer polygon,
satisfying the specified pitch value.
polygon_fill = {
pattern_spec = {space_x = 0.1,
space_y = 0.1},
layers = {SHAPE_M1_fill},
pitch = {y = 0.1,
context_layer = aFILL_CONTEXT_LAYER}
}
}}
• The pitch value must correlate with the pattern dimension and the space_x and
space_y values.
For example, the following value must be a multiple of the pitch value; otherwise, the
resultant fill might be missing patterns:
((pattern dimension in x-direction) + space_x)
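As a worked example with hypothetical numbers: if the pattern is 0.4 μm wide in the x-direction and space_x is 0.1 μm, then

```latex
(\text{pattern dimension in x-direction}) + \text{space\_x} = 0.4 + 0.1 = 0.5
```

A pitch value of 0.1 or 0.25 divides 0.5 evenly, whereas a pitch of 0.15 does not, so the resultant fill could be missing patterns.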
For a higher insertion rate, you can increase the shift_factor value of the insertion
option. The unified_fill() function automatically adjusts the fill patterns inside the
target layer polygons such that they align with the grid lines of the context layer polygon.
The shift_factor option is useful when polygons on the target layer are complex,
nonrectangular shapes. Example 27 shows how to use the shift_factor option.
Criteria Analysis
With the unified_fill() function, you can customize the fill procedure for metrics such
as density. The tool supports equation-based criteria analysis to meet requirements such
as density ranges, target density, and uniform gradient density across the design. The
unified_fill() function uses the same algorithms as the density() function, ensuring
that the fill results meet the postfill design rule requirements.
Use the following arguments of the unified_fill() function to configure the fill
procedure to meet your requirements:
criteria
delta_window
delta_x
delta_y
window_layer
boundary
The criteria option supports the following options to configure the fill procedure:
target
gradient
fill_layer_keys
design_layers
design_layers_hash
window_function
The string names specified in the output_layer_key option of the pattern definition are
used to identify the fill layers used in criteria equations. You can specify any number of
criteria equations in the criteria argument. If you have multiple criteria equations for
the same fill layer, the tool ensures that at least one of the equations is met. If there are
multiple criteria equations for different fill layers (for example, aM1_FILL and aM2_FILL
each have a criteria equation defined), then the tool addresses all the equations.
When the criteria argument is specified without a window function, the tool evaluates
a default equation to check against the criteria value. This default equation is evaluated
for every delta window and checked against the target and gradient values of the
criteria argument. The fill patterns are evaluated to converge the evaluated value to the
specified value. The default equation is:
[(Area of fill layers) + (Area of design layers)] / (Area of delta window)
where:
• Area of fill layers is the sum of areas within the current delta window of all polygon data
on all the fill layers that are specified in the fill_layer_keys option.
• Area of design layers is the sum of areas within the current delta window of all polygon
data on all the design layers that are specified in the design_layers option.
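As a worked example with hypothetical areas: in a 100 μm x 100 μm delta window that contains 1500 μm² of design-layer polygons and 2000 μm² of inserted fill, the default equation evaluates to

```latex
\frac{2000 + 1500}{100 \times 100} = \frac{3500}{10000} = 0.35
```

which meets a 35 percent target density for that window.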
Note:
Pattern definitions must generate fill output that allows the criteria to be met.
For example, the tool cannot meet a higher target density when the provided
pattern definitions are composed of narrow, widely-spaced rectangles.
Example 28 shows a fill specification for two fill layers, M1_fill and M2_fill, that have their
own criteria equations. The cumulative density of the existing aM1 design layer and M1_fill
fill layer should be between 25 percent and 45 percent with a target of 35 percent, when
measured in 100 μm x 100 μm windows stepped across the design in increments of 50 μm
along the x- and y-directions. The M2_fill fill layer has a similar fill specification with the
same density values.
SHAPE_M1_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M1_fill"}
polygons = DATA_M1_fill,
};
SHAPE_M2_fill : polygon_layer_s = {
layer_spec = {output_layer_key = "M2_fill"}
polygons = DATA_M2_fill,
};
fill_patterns_m1_m2 = {{
type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill,
SHAPE_M2_fill}
}
}}
M1_density_criteria : fill_criteria_s =
{
target = {range = [0.25, 0.45], target = 0.35},
fill_layer_keys = {"M1_fill"},
design_layers = {aM1}
};
M2_density_criteria : fill_criteria_s =
{
target = {range = [0.25, 0.45], target = 0.35},
fill_layer_keys = {"M2_fill"},
design_layers = {aM2}
};
uf_fill_output_h = unified_fill(
fill_patterns = fill_patterns_m1_m2,
criteria = {M1_density_criteria,
M2_density_criteria},
delta_window = {100, 100},
delta_x = 50,
delta_y = 50,
fill_boundary = {type = LAYER, layer = data_extent},
window_layer = aWINDOW,
grid = 0.001,
boundary = ALIGN
);
M1_fill_output = uf_fill_output_h["M1_fill"][0];
M2_fill_output = uf_fill_output_h["M2_fill"][0];
Example 29 is another fill example that uses multiple design layers and fill layers in the
same criteria equation. The sum of the densities in every window of M1, M2, and M3 for
both design and fill should be greater than or equal to 105 percent.
Note:
For simplicity, Example 29 shows only the criteria equations.
Example 29 Fill Specification With Multiple Layers in the Same Criteria Equation
M1_M2_density_criteria : fill_criteria_s =
{
target = {range = >= 1.05},
fill_layer_keys = {"M1_fill", "M2_fill", "M3_fill"},
design_layers = {aM1, aM2, aM3}
};
For a specific fill layer, when there is more than one fill pattern specified, the
unified_fill() function sequentially inserts the fill patterns in the order that the
patterns are declared in the function. When a criteria is specified, it is possible that the
unified_fill() function does not make use of all the fill pattern definitions. That is, the
unified_fill() function stops inserting fill as soon as the criteria is met regardless of
whether all the fill patterns have been used.
In Example 30, the procedure for the M1_fill fill layer is composed of two insertion
sequences. After inserting the first pattern, if the criteria is met, the tool does not perform
the second sequence.
window_layer = aWINDOW,
grid = 0.001,
boundary = ALIGN
);
M1_fill_output = uf_fill_output_h["M1_fill"][0];
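The stop-as-soon-as-the-criteria-is-met behavior can be modeled with a short loop. This Python sketch (an illustration of the described behavior, not the tool's implementation) applies each pattern's density contribution in declaration order and stops once the target is reached:

```python
def insert_until_met(pattern_gains, start_density, target):
    """Apply each fill pattern's density contribution in declared order,
    stopping as soon as the target is met.
    Returns (final density, number of patterns actually used)."""
    density = start_density
    used = 0
    for gain in pattern_gains:
        if density >= target:
            break  # criteria already met; remaining patterns are skipped
        density += gain
        used += 1
    return density, used

# Two insertion sequences; the first alone satisfies the target,
# so the second is never performed.
density, used = insert_until_met([0.20, 0.10], start_density=0.10,
                                 target=0.25)
print(used)  # 1
```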
Instead of using the default criteria equation, you can specify your own criteria by defining
the window function as shown in Example 31.
See the Unified Fill Utility Functions section in the “Utility Functions” chapter of the
IC Validator Reference Manual for more information about the utility functions you can use
in the window function.
The following rules and restrictions apply:
• Multiple fill layers cannot be used in a single window function.
• If target is not specified, the unified_fill() function uses the midpoint of the range
as its internal target.
• A valid constraint that converges must be specified. For example, the following
constraint is not allowed:
target = {range = <0.50}
The following examples show the behavior for several scenarios:
• target = {range = >0.50}
The unified_fill() function tries to achieve a target density greater than 50 percent.
• target = {range = [0.45, 0.55]}
The allowed range is greater than or equal to 45 percent and less than or equal to
55 percent. Because target is not specified, the tool works toward the midpoint of the
range, a target density of 50 percent.
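The midpoint rule can be expressed directly. This is a hedged Python sketch of how an internal target could be resolved from a range when target is not given (an illustration of the stated rule, not the tool's code):

```python
def effective_target(low, high, target=None):
    """Resolve the density target: use the explicit target when given,
    otherwise the midpoint of the allowed range."""
    return target if target is not None else (low + high) / 2

print(effective_target(0.45, 0.55))        # 0.5: midpoint of the range
print(effective_target(0.25, 0.45, 0.35))  # 0.35: explicit target wins
```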
Output Layers
The unified_fill() function returns a hash containing layer lists, with each list having
three polygon layers:
0 Uncolored
1 Color 1
2 Color 2
To retrieve a list of polygon layers, you specify the output_layer_key value that was
used by the layer_spec option of the fill pattern definition. If the color option is false,
only uncolored data is generated on index 0. If the color option is true, then the two
colored output layers are returned on indexes 1 and 2, respectively. For example,
M1_fill_ALL = M1_fill_h["M1_fill"][0];
M1_fill_COLOR1 = M1_fill_h["M1_fill"][1];
M1_fill_COLOR2 = M1_fill_h["M1_fill"][2];
Example 32 is a procedure for two fill layers, M1_fill and M2_fill, that shows how the fill
layers are accessed from the output hash generated by the unified_fill() function.
M1_fill_output = uf_fill_output_h["M1_fill"][0];
M2_fill_output = uf_fill_output_h["M2_fill"][0];
Example 33 shows a fill specification for the M1_fill fill layer that has two pattern insertion
steps. Each pattern definition has a specified color. The unified_fill() function creates
fill shapes in each pattern insertion step by coloring them as individual sets. Furthermore,
both pattern insertion steps use the same string name in the output_layer_key option
to designate the fill layer. The unified_fill() function accumulates all polygon data
from different pattern definitions into the same polygon layer list element in the output hash
key of that fill layer. In this example, the value of list index 0 of the M1_fill output hash key
has complete polygon data from both pattern insertion steps. The value of list index 1 has
complete color 1 polygon data from both pattern insertion steps. Similarly, the value of list
index 2 has complete color 2 polygon data from both pattern insertion steps.
M1_fill_ALL = M1_fill_h["M1_fill"][0];
M1_fill_COLOR1 = M1_fill_h["M1_fill"][1];
M1_fill_COLOR2 = M1_fill_h["M1_fill"][2];
Apart from the fill layers, you can also retrieve the bounding box, that is, the extents,
of one or more output fill layers by specifying one or more of the appropriate
output_layer_key values. Example 34 and Figure 71 show the usage of the
extents_output option.
M1_fill_h = unified_fill(
fill_patterns = {{
{type = UF_POLYGON,
polygon_fill = {
layers = {SHAPE_M1_fill},
color = true
}
}
},
extents_output = {
{output_layer_key = "M1_fill_boundary",
fill_layer_keys = {"M1_fill"}
}
}
);
M1_fill = M1_fill_h["M1_fill"][0];
M1_fill_E = M1_fill_h["M1_fill_boundary"][0];
When the output_layer_key option is not specified in the extents_output option, the
bounding box of the pattern is constructed by considering all the fill layers present in the
pattern placement. For patterns that are generated by the UF_STACK or UF_CELL type, the
extents_output option works in accordance with the separate_patterns option.
Layer Compression
The geometry data on output polygon layers generated by the unified_fill() function
can be grouped hierarchically to reduce the size of the output file created by functions
such as write_gds() and write_milkyway(). Layer compression can be achieved in
two ways in an IC Validator runset:
• Specify the hierarchical_fill option inside pattern definitions of the
unified_fill() function.
10
Working With Edges
This chapter gives a brief overview of using edges in IC Validator runsets. The edge
functions are described in the IC Validator Reference Manual.
The IC Validator tool detects the violation from e2 to e3 and avoids the violation from e1 to
e3.
Topological Operations
Topological operations use the complete edges selected and do not add new endpoints.
These operations include:
• adjacent_edge() and not_adjacent_edge()
Logical Operations
Logical operations generate new endpoints. These operations include:
• and_edge()
• not_edge()
• or_edge()
• xor_edge()
Edge Manipulation
These functions create edges, which are always orphan edges:
• move_edge()
• snap_edge()
• extend_edge()
• copy_edge()
11
DRC Error Classification
This chapter explains the DRC error classification capabilities of the IC Validator tool and
how to use this feature.
The DRC error classification capabilities are described in the following sections:
• Using DRC Error Classification
• DRC Error Classification Flow
• Using PERC Error Classification
IC Validator command. This avoids the possibility of errors from different commands within
the same rule interacting in the cPYDB and preventing a hierarchical match when they are
considered for an individual command in a subsequent run. Violation comments should be
made unique to the command to ensure this relationship.
It is also important to clean up invalid cPYDB data. For example, if the error shapes
change but the new shapes are considered valid and are to be waived, the cPYDB entries
for that violation and cell should be purged and recreated with the new error shapes.
Leaving the old, invalid shapes in the cPYDB can prevent the IC Validator tool from finding
a hierarchical match.
Currently, error classification is most applicable to DRC violations. Matching of previously
classified errors of other violation types, including ERC, device extraction, and density,
is not supported.
Error databases, by default, reside in run_details/pydb unless a different path is specified
in the db_path argument.
1. Run the IC Validator tool on the design. The resultant DRC errors can then be
classified using VUE or the pydb_report utility.
For reuse of classification data in future runs, export the classified errors to an error
classification database using VUE or the pydb_export utility.
2. When the IC Validator tool is run again, specify the error classification database
in the match_errors argument of the error_options() function. If multiple error
classification databases are specified, each one is searched for a matching error in
the order specified. The error is pre-classified with the data in the error classification
database it matches first, and it is written to the error database for the current run. The
error shows as classified in the LAYOUT_ERRORS file and in VUE.
By using the suppress_matched_errors option of the match_errors argument of the
error_options() function, you can specify errors that are not written to the error
database.
Note:
The error_limit_per_check argument of the error_options()
function only applies to new errors. All errors produced during this run
are checked against the imported error classification database list, and
all matching errors are written to the error database. Errors that do not
match are subject to the error_limit_per_check limit.
3. Finally, new classifications from the current run or from other runs can be merged into
an existing error classification database. Newly classified errors are appended to the
database, while existing errors are updated if they have changed.
The input file to the pydb_report utility is in a standard comma-delimited format. See
pydb_report Utility on page 289 for more information. The file used in this example is
INVH,,Ignore,INVH is a known good cell,,,,
,Met1 Spacing must be >= 3,Waive,Done to fit tight routing,,,,
TGATE,,Watch,These need to be fixed,0,0,10,10
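The field layout of these rows can be inspected with any CSV reader. In this Python sketch, the column interpretation (cell, violation comment, class, comment, then four optional window coordinates) is inferred from the example rows above, so treat it as an illustration rather than a format definition:

```python
import csv, io

# The three rows from the example classify file above.
classify_text = """\
INVH,,Ignore,INVH is a known good cell,,,,
,Met1 Spacing must be >= 3,Waive,Done to fit tight routing,,,,
TGATE,,Watch,These need to be fixed,0,0,10,10
"""

# Inferred field layout (an assumption based on the example rows):
# cell, violation comment, class, comment, x1, y1, x2, y2
rows = list(csv.reader(io.StringIO(classify_text)))
for cell, violation, tag, comment, *window in rows:
    has_window = all(window)  # all four coordinates present
    print(cell or "<any cell>", tag,
          "window" if has_window else "no window")
```

Empty cell or violation fields act as wildcards in the examples above: the second row classifies a violation in all cells, and the first classifies every violation in one cell.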
The pydb_report utility writes a log file, pydbreport.log, to the current working directory.
The same information is displayed on screen. The output from this example is shown in
Example 35.
Class: Watch
Comment: These need to be fixed
12 new errors classified.
Classify time = 0:00:00 User=0.01 Sys=0.00 Mem=0.000 GB
Overall Classify time = 0:00:00 User=0.01 Sys=0.01 Mem=0.000 GB
Done.
You can get a report of the classified errors in the error database for the current run using
the pydb_report utility:
pydb_report -pydb_name PYDB_AD4FUL \
-pydb_path ./run_details/pydb \
-show_classified file=classified.txt
Example 36 shows the first error record from the classified error report (classified.txt) for
this example.
Class Summary:
Class Errors
------------- ------
Watch 6
Command: runset.rs:101:enclose
- - - - - - - - - - - - - - - - - - - - - - - -
( lower left x, y ) ( upper right x, y) Distance
- - - - - - - - - - - - - - - - - - - - - - - -
(3.0000, 4.0000) (4.0000, 5.1180) 1.0000
* Class: Watch
* Created by: juser on 2008-10-21 08:36:19
* Last Modified by: juser on 2008-10-21 08:36:19
* Comment by: juser on 2008-10-21 08:36:19
These need to be fixed
This example creates a new error classification database called CPYDB_AD4FUL in
the ./cpydb directory. This database can be used in subsequent IC Validator runs to retain the
error classification data. The pydb_export utility produces a log file, pydbexport.log, in the
current directory. The output from this example run is shown in Example 37.
Export Successful.
PYDB_Export is done.
After the IC Validator tool runs, notice in the new LAYOUT_ERRORS file that the errors
were pre-classified using data from the error classification database.
• The error summary section contains error counts broken out by classification tag.
• The error details section groups errors by their classification tag.
Note:
The pydb_report utility can be used to regenerate the LAYOUT_ERRORS file
from the error database at any time. Also, you can use the pydb_report utility to
generate the classified error report to show detailed classification information,
as shown in Example 38 and Example 39.
Example 38 shows the error summary section.
----------------------------------------------------------------------
toxcont must be enclosed by met1 by more than 1.0 without touching or
overlapping
----------------------------------------------------------------------
run2.rs:107:enclose
Structure ( lower left x, y ) ( upper right x, y) Distance
- - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TGATE (21.5000, 23.5000) (22.0000, 24.9140) 0.5000
TGATE (21.5000, 19.0860) (22.0000, 20.5000) 0.5000
TGATE (21.5000, 20.5000) (22.0000, 23.5000) 0.5000
The log file from the pydb_report utility is shown in Example 40.
Merge the changes into the existing error classification database (cPYDB), using the
pydb_export utility:
pydb_export -pydb_name PYDB_AD4FUL \
-pydb_path ./run_details/pydb -cpydb_name CPYDB_AD4FUL \
-cpydb_path ./cpydb -merge
The log file from the pydb_export utility for the merge step is shown in Example 41.
When you run the IC Validator tool again, the LAYOUT_ERRORS file shows that the
changes made to the database are retained.
See the detailed information by running the pydb_report utility:
pydb_report -pydb_name PYDB_AD4FUL \
-pydb_path ./run_details/pydb \
-show_classified file=classified2.txt
pydb_report Utility
Use the pydb_report utility with an error database to
• Generate a detailed report of classified errors.
• Regenerate the LAYOUT_ERRORS file.
• Classify errors.
When an error is classified, the classification tag, UNIX user ID, timestamp, and possible
comment are stored with the error in the error database for the current run.
Additionally, the pydb_report utility can be used on an error classification database to
generate a detailed report of classified errors.
Note:
See Database Naming Conventions on page 280.
This utility is in the IC Validator installation bin directory. Table 53 defines the
command-line options.
Table 53 pydb_report Utility Command-Line Options
-64 Runs the utility with 64-bit coordinates support. This command-line
option might be necessary if coordinates in the error database exceed
the capacity of a 32-bit signed integer, that is, if the IC Validator tool
runs with the -64 command-line option.
-classify input_file Classifies errors in an error database based on the given input file.
See Classify File Format in the following section.
-classify_by_window [interacting | enclosed]
Specifies how errors are selected for classification based on window
coordinates in the classify file. The default is interacting.
• interacting. Errors having any interaction with the window
coordinates are selected for classification.
• enclosed. Errors completely enclosed by the window coordinates
are selected for classification. Selected errors can touch the
enclosing window.
-le output_file Generates the LAYOUT_ERRORS file from the current error
database.
-no_error_details Does not report error details (that is, coordinates) in the
LAYOUT_ERRORS or TOP_LAYOUT_ERRORS files.
-show_classified [cell=cell] [violation=comment] [class=class1,...,classn] [file=file]
Generates a detailed classified error report for an error database and
error classification database. The specified parameters can restrict
the report by violation, cell, and class. If the output file is not specified,
output goes to the screen.
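The interacting and enclosed selection modes described for -classify_by_window come down to two bounding-box tests. A Python sketch of the stated semantics (an illustration, not tool code):

```python
def interacting(error, window):
    """True when the error box has any interaction with the window box.
    Boxes are (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2."""
    ex1, ey1, ex2, ey2 = error
    wx1, wy1, wx2, wy2 = window
    return ex1 <= wx2 and wx1 <= ex2 and ey1 <= wy2 and wy1 <= ey2

def enclosed(error, window):
    """True when the error box is completely inside the window box;
    touching the window edge still counts, per the description above."""
    ex1, ey1, ex2, ey2 = error
    wx1, wy1, wx2, wy2 = window
    return wx1 <= ex1 and wy1 <= ey1 and ex2 <= wx2 and ey2 <= wy2

win = (0, 0, 10, 10)
print(interacting((8, 8, 12, 12), win))  # True: overlaps the window
print(enclosed((8, 8, 12, 12), win))     # False: extends past it
print(enclosed((0, 0, 10, 10), win))     # True: touching still encloses
```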
Classification Description
Ignore Error that the designer does not care about: to be fixed later in the process flow
Unmatched Error that exists in an imported error classification database but was not
produced by the current run
i Ignore case.
For example, the following line in a classify file results in all violation comments starting
with M1, M2, M3, or M4 in cell Cell1 being classified as Ignore:
Cell1,"/^M[1-4].*/",Ignore,"comment"
The following line in a classify file results in violations in all cells containing “sram”,
ignoring case, being waived:
"/.*sram.*/i","",Waive,"comment"
A violation comment that actually begins with a slash cannot be matched with a literal
string, but must use GNU extended regular expression special characters. See the table
of these special characters in the text_options() function in the IC Validator Reference
Manual for more information. For example, the following in a classify file would match a
violation comment containing “/M4 rule 5/”:
"/\/M4 rule 5\//"
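The slash-delimited pattern and its optional trailing i flag can be modeled with ordinary regular expressions. This sketch uses Python's re module as a stand-in for the GNU extended regular expressions the utility actually uses, so minor syntax differences between the two dialects are possible:

```python
import re

def matches(pattern_field, text):
    """Match a classify-file field: /regex/ with an optional trailing i
    for case-insensitive matching; anything else is a literal string."""
    m = re.fullmatch(r"/(.*)/(i?)", pattern_field, flags=re.DOTALL)
    if m is None:
        return pattern_field == text           # literal comparison
    flags = re.IGNORECASE if m.group(2) == "i" else 0
    return re.search(m.group(1), text, flags) is not None

print(matches('/^M[1-4].*/', "M3 spacing < 0.1"))   # True
print(matches('/.*sram.*/i', "SRAM_BLOCK_A"))       # True
print(matches('/\\/M4 rule 5\\//',
              "see /M4 rule 5/ for details"))       # True
print(matches("Cell1", "Cell1"))                    # True: literal match
```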
Examples
To regenerate TOP.LAYOUT_ERRORS from the current run, use this syntax:
pydb_report -pydb_name PYDB_TOP -pydb_path ./run_details/pydb \
-le TOP.LAYOUT_ERRORS
To classify errors in the error database for the current run, use this syntax:
pydb_report -pydb_name PYDB_TOP -pydb_path ./run_details/pydb \
-classify classify.csv
To show classification details on screen of all ignored and waived errors in the cell TOP,
use this syntax:
pydb_report -pydb_name PYDB_TOP -pydb_path ./run_details/pydb \
-show_classified cell=TOP class=Ignore,Waive
To show classification details of all waived errors in an error classification database, use
this syntax:
pydb_report -cpydb_name CPYDB_TOP -cpydb_path ./cpydb \
-show_classified class=Waive
pydb_export Utility
Use the pydb_export utility to export error classification data from an error database to
either
• Create a new error classification database (cPYDB). See Error Classification Database
Format on page 297.
• Merge into an existing error classification database by appending new errors and
updating existing ones.
Also, the pydb_export utility can be used to manage error classification databases by
merging several error classification databases together or by purging classification data
from violations and cells.
For an error to be considered equivalent between runs, the violation comments must
match exactly or match up to the first occurrence of a specified comment delimiter, and the
error polygon must match at cell level.
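The comment-delimiter rule can be sketched as a comparison that ignores everything after the first delimiter. The "#" delimiter below is a hypothetical example; the actual delimiter is whatever you configure:

```python
def comments_equivalent(old, new, delimiter=None):
    """Violation comments match exactly, or match up to the first
    occurrence of the specified comment delimiter."""
    if delimiter is not None:
        old = old.split(delimiter, 1)[0]
        new = new.split(delimiter, 1)[0]
    return old == new

# Identical up to the "#" delimiter, so still considered the same violation:
print(comments_equivalent("M1 spacing # rev 2",
                          "M1 spacing # rev 3", "#"))  # True
print(comments_equivalent("M1 spacing", "M2 spacing", "#"))  # False
```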
Note:
See Database Naming Conventions on page 280.
This utility is in the IC Validator installation bin directory. Table 56 defines the
command-line options.
-cell cell Limits exports to errors of the given cell. This command-line option
can be given more than one time to specify multiple cells. It
supports regular expressions.
-exclude_cell cell Excludes errors in the given cell from export. This command-line
option can be given more than one time to specify multiple
cells. It supports regular expressions. The -exclude_cell
command-line option takes priority over the -cell command-line
option.
-flatten layout=layout cell=cell
Flattens the output error classification database to the given top
cell using the given layout. The layout and top cell used to create
the error classifications being flattened should be given as the
layout and cell parameters to this option. The layout format is
auto-detected and is given as a path using the same conventions
as the library1 and library2 command-line options of the LVL
utility. Only violations in and hierarchically underneath the given
cell will be output. Layout grid and drawn checks on original cells
are not flattened. See Flattening an Error Classification Database
on page 299 for more information.
-merge Merges the specified error database into the specified error
classification database, updating the classification of existing
errors or adding new errors to the database.
-purge purge_file Purges all classification data from violations and cells based on
the given input file. See Purge File Format on page 297 for file
format information.
If any error databases reference the target error classification
database and errors matched the violations and cells that are
being purged, those errors are no longer viewable.
-set_password password Sets the password on the given error classification database.
Specify prompt to read the password from the command line.
-violation violation Limits exports to the given violation comment. This command-line
option can be given more than one time to specify multiple
violations. It supports regular expressions.
Examples
To create a new error classification database (cPYDB) containing classification data from
the current run, use this syntax:
pydb_export -pydb_name PYDB_AD4FUL \
-pydb_path ./run_details/pydb \
-cpydb_name CPYDB_AD4FUL -cpydb_path ./cpydb \
-create
To merge classification data into an existing error classification database that is password
protected, use this syntax:
pydb_export -pydb_name PYDB_AD4FUL \
-pydb_path ./run_details/pydb \
-cpydb_name CPYDB_AD4FUL -cpydb_path ./cpydb \
-merge -password password
To convert an error classification database with 32-bit coordinates to one with 64-bit
coordinates in place, use the following syntax. When using the -convert_to_64
command-line option, you need exclusive access to the database; that is, the database
cannot be open by VUE, any other scripts, or IC Validator jobs.
pydb_export -cpydb_name db name -cpydb_path db path -convert_to_64
To copy an error classification database and convert it to 64-bit without modifying the
original database, use this syntax:
pydb_export -cpydb_name db name -cpydb_path db path \
-copy_cpydb name=input db name path=input db path \
-convert_to_64
The pydb_report utility can provide information about the coordinate width of an existing
error classification database:
pydb_report -cpydb_name CPYDB_AD4FUL64 -cpydb_path ./cpydb
...
To export only the classes starting with “W” in all cells that do not start with "W", use this
syntax:
pydb_export -class "/^W.*/" -exclude_cell "/^W.*/" ...
To export all classes except “Ignore” and “Waive”, use this syntax:
pydb_export -exclude_class Ignore -exclude_class Waive ...
• GDSII: /path/to/file.gds
• OASIS: /path/to/file.oas
• Milkyway: /library/path/LIBRARY_NAME
• OpenAccess: /path/to/lib.defs/LIBRARY_NAME
• NDM: /library/path/LIBRARY_NAME
The flattening process can be distributed using the -host_init command-line option of
the pydb_export utility. The following use cases are supported.
To flatten during export from PYDB:
pydb_export \
-pydb_path path -pydb_name name \
-cpydb_path output_path -cpydb_name output_name \
-create \
-flatten layout=layout cell=topcell \
[-host_init number]
In addition to the cell name and device instance and model names, the error instance
has a Comment field that can be unique for each error instance, all of its pin names and
associated nets, possibly a set of reference objects (cell instances, device instances,
nets), whether the device was a result of merging, tags, and so on. Net errors have a
similar set of metadata. Errors are matched at the cell level based on the following criteria:
Device Errors
• Violation Comment / Rule
• Cell Name
• Device Name
• Device Type
• Comment
Net Errors
• Violation Comment / Rule
• Cell Name
• Net Name
• Comment
If all of these criteria of a new error match a classified error in the cPYDB, IC Validator
applies the classification to the new error.
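The matching criteria listed above amount to comparing tuples of fields. A Python sketch, in which simple dictionaries stand in for the error records (the field names here are illustrative, not the tool's internal names):

```python
def device_error_key(err):
    """Fields that must all match for a device error, per the list above."""
    return (err["violation"], err["cell"], err["device_name"],
            err["device_type"], err["comment"])

def net_error_key(err):
    """Fields that must all match for a net error, per the list above."""
    return (err["violation"], err["cell"], err["net_name"],
            err["comment"])

new = {"violation": "floating gate", "cell": "TGATE",
       "net_name": "n42", "comment": ""}
classified = dict(new)  # a previously classified error, same fields

# All criteria match, so the stored classification is applied:
print(net_error_key(new) == net_error_key(classified))  # True
```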
12
Error Management Utility
This chapter explains the error management utility that is available with the IC Validator
tool.
The icv_pydb utility provides command-line options to enable any selection by cell (either
individually or hierarchically), violation comment, error classification, or cell-level bounding
box.
This utility is in the IC Validator installation bin directory. Table 57 defines the
command-line options.
Table 57 Error Management Utility Command-line Options
-c top_cell Specifies the top cell of the layout. If this option is not given, the
top cell of the input PYDB will be used.
-classify classify file Classifies errors in the input database based on the given input
file. This command-line option can only be used to modify a
PYDB in place. See the “Classify File Format” section for more
information.
-exclude_heat_map Does not copy heat maps to the output database. This
command-line option can be used to avoid errors if the heat
map grids of the same cell in the input database and output
database do not align.
-format GDSII|OASIS Indicates that the output file refers to a layout to be exported
from the input database, and it specifies the format.
-layout original_layout Specifies the original layout from which the design hierarchy
is read when exporting errors to a layout. If this command-line
option is not given, the layout path will be taken from the PYDB.
An input layout is required for the layout export flow to work,
whether or not this argument is given.
-list_cpydbs Shows a list of cPYDB databases that are used to match error
classifications in the input PYDB.
-rename_cpydb_paths pattern new_path
Changes any referenced cPYDB path matching the given
pattern to the specified new cPYDB path. Only the path
component of the referenced cPYDB is updated. This
command-line option can be used only to modify a PYDB in
place.
-replace_db_path new_db_path
Replaces the database path stored in the input cPYDB with
the given value. This operation may only be used to modify
a cPYDB in place, and it requires that no other users be
connected to the database.
-replace_user_ids new_user_id
Globally replaces all user IDs in the input cPYDB with the given
value. This operation may only be used to modify a cPYDB in
place, and it requires that no other users be connected to the
database.
-sl layernum[-range][,datatype[-range]] ...
-select_layers layernum[-range][,datatype[-range]] ...
Specifies only those errors that are derived from the given
layers. For OpenAccess layouts, the layers are specified by
layout layer and purpose name, instead of layer and datatype
number. Layer information is available only if IC Validator was
run with the -vue command-line option.
-sln layer_name
-select_layer_names layer_name
Specifies only those errors that are derived from runset
layer names matching the given pattern. Layer information
is available only if IC Validator was run with the -vue
command-line option.
-ul layernum[-range][,datatype[-range]] ...
-unselect_layers layernum[-range][,datatype[-range]] ...
Specifies that errors which are derived from the given layers are
not included. For OpenAccess layouts, the layers are specified
by layout layer and purpose name, instead of layer and datatype
number. Layer information is available only if IC Validator was
run with the -vue command-line option.
-uln layer_name
-unselect_layer_names layer_name
Specifies that errors which are derived from the given layers
are not included. Layer information is available only if
IC Validator was run with the -vue command-line option.
-select_cell cell Includes only those cells which match the given pattern. This
command-line option does not include any cells placed at any
level underneath the matching cells.
-unselect_cell cell Does not include cells which match the given pattern. This
command-line option does not exclude any cells placed at any
level underneath the matching cells.
-select_cell_hierarchical cell
Includes only those cells which match the given pattern. This
command-line option also includes all descendants of the
matching cells.
-unselect_cell_hierarchical cell
Does not include cells which match the given pattern. This
command-line option also excludes all descendants of the
matching cells.
-select_interacting cell,x1,y1,x2,y2
Includes only those errors in the given cell which interact
with the given window. This command-line option is cell level
and does not apply to any errors underneath the given cell
hierarchically. Window selections are combined so that errors
included by any of the window selections in a cell, and not
matched by any unselections, are included.
-unselect_interacting cell,x1,y1,x2,y2
Excludes errors in the given cell which interact with the given
window. This command-line option is cell level and does not
apply to any errors underneath the given cell hierarchically.
-select_enclosed cell,x1,y1,x2,y2
Includes only those errors in the given cell which are enclosed
by the given window. This command-line option is cell level
and does not apply to any errors underneath the given cell
hierarchically. Window selections are combined so that errors
included by any of the window selections in a cell, and not
matched by any unselections, are included.
-unselect_enclosed cell,x1,y1,x2,y2
Excludes errors in the given cell which are enclosed by the
given window. This command-line option is cell level and does
not apply to any errors underneath the given cell hierarchically.
-rename_cells rename_cells_file
Changes cell names in the output according to the given CSV
file, with the first column being the old cell name and the second
column being the new cell name.
This command-line option can be used to modify the input
database in place if there is no output database specified. If
modifying a database in place, this operation requires that no
other users be connected to the database.
-violation_map violation mapping file
Specifies a file that maps rules and commands to a layer and
datatype pair in the output layout. This CSV-formatted file
includes the following columns:
"Violation Comment","Command","Layer","Datatype".
Wildcards are supported for the violation comment and
command columns. The command column is optional and can
be left blank, but if present, will select only the errors when the
runset file:line:function match. By default, errors without an entry
in this file are not output.
If a violation mapping file is not given, rules are assigned an
arbitrary layer and datatype.
-vue output VUE file Creates the given VUE file which points to the output PYDB.
-vue_template input_vue_file
Uses the given input VUE file as a template to create the output
VUE file.
-dbu microns Optionally specifies a value in microns for the database unit of
the output database. This value is used when creating a new
database, or if merging into an existing database, it can be used
to verify the DBU of the merged database. If this command-line
option is not specified, the DBU of the input database is used
when creating a new output database.
When exporting to an output layout, if this argument is given,
it is used as the DBU of the output layout. If this argument is
not given, the DBU of the original input layout (which might be
coarser than the DBU of the input databases) is used.
-file command file Uses the given XML command file to control tool operation.
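Several of these selections reduce to simple pattern lookups. The wildcard lookup performed by a violation mapping file can be sketched with fnmatch-style patterns; the mapping rows below are hypothetical, and only the column layout comes from the -violation_map description:

```python
from fnmatch import fnmatch

# Hypothetical mapping rows:
# (violation comment pattern, command pattern, layer, datatype)
violation_map = [
    ("M1*", "", 100, 0),
    ("M2*", "*:enclose", 101, 0),
]

def lookup(violation, command):
    """Return the (layer, datatype) for the first matching row, or None.
    A blank command pattern matches any command; by default, errors
    without an entry are not output."""
    for vpat, cpat, layer, datatype in violation_map:
        if fnmatch(violation, vpat) and (not cpat or fnmatch(command, cpat)):
            return layer, datatype
    return None

print(lookup("M1 spacing < 0.1", "rules.rs:12:external"))  # (100, 0)
print(lookup("M2 enclosure", "rules.rs:40:enclose"))       # (101, 0)
print(lookup("V1 coverage", "rules.rs:50:density"))        # None
```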
The results might be split into three error databases (PYDBs), one for each subblock. For
example:
icv_pydb -i run_details/pydb/PYDB_TOP -select_cell_hierarchical \
"Block1" \
-o new_pydb/PYDB_Block1 -vue Block1.vue -vue_template TOP.vue
In this example, the top cell of each output database is still TOP. To make the lower-level
block the top cell, the -new_top_cell command-line option can be used.
icv_pydb -i run_details/pydb/PYDB_TOP -select_cell_hierarchical "Block1" \
-new_top_cell TOP,Block1 \
-o new_pydb/PYDB_Block1 -vue Block1.vue -vue_template TOP.vue
Selection by violation comment and by bounding box is also allowed. For example, to split
out only the metal1 rules with no cell selections, the tool might be run as follows:
icv_pydb -i run_details/pydb/PYDB_TOP -svc "M1" \
-o new_pydb/PYDB_TOP_m1 -vue TOP_m1.vue -vue_template TOP.vue
However, notice that any heat maps that were at the top level are not copied in this case,
since TOP no longer exists in the output database.
Since PYDB does not contain cell placement data, bounding-box selection is valid only
within a cell, and there is no looking down the hierarchy. For example, to include top-level
data over block1 when the extents of the block1 placement are (100,200),(300,400), the
tool might be run this way:
icv_pydb -i run_details/pydb/PYDB_TOP \
-select_cell_hierarchical "Block1" \
-select_interacting TOP,100.0,200.0,300.0,400.0 \
-o new_pydb/PYDB_Block1 -vue Block1.vue -vue_template TOP.vue
Selection can also be done by error classification using several command-line options to
include or exclude errors of certain classifications, and also to drop error classification data
from the output.
If merged_pydb/PYDB_TOP does not exist, it is created and subsequent runs merge data
into the existing database. Alternatively, they may be combined on the command line:
icv_pydb -i metal1run/run_details/pydb/PYDB_TOP \
-i metal2run/run_details/pydb/PYDB_TOP \
-o merged_pydb/PYDB_TOP
When merging multiple databases with different top cells, provide a (single instance)
translation to the common top cell, so that “highlight in top cell” works properly in VUE.
This can be done using the -new_top_cell command-line option. For example,
icv_pydb -i block1run/run_details/pydb/PYDB_Block1 \
-new_top_cell Block1,TOP,100,100,0,0 -o merged_pydb/PYDB_TOP
icv_pydb -i block2run/run_details/pydb/PYDB_Block2 \
-new_top_cell Block2,TOP,200,100,0,0 -o merged_pydb/PYDB_TOP
It is possible that the PYDBs being merged could be at different DBU resolutions.
When merging into an existing PYDB, the icv_pydb tool will always use the DBU of the
destination database. However, you can specify a DBU on the command line so that,
when creating a new PYDB, the specified DBU is used, and when merging into an existing
PYDB, the DBU of the existing database can be verified.
The -dbu command-line option allows the DBU to be given as a floating-point value in
microns. If merging into an existing database with a different DBU from the value given on
the command line, an error is issued and the tool exits.
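For example, a merge run with an explicit DBU might look like the following sketch (paths reused from the merge example above; the 0.0001-micron value is illustrative):

```
icv_pydb -i metal1run/run_details/pydb/PYDB_TOP \
    -o merged_pydb/PYDB_TOP -dbu 0.0001
```

If merged_pydb/PYDB_TOP does not exist, it is created with a 0.0001-micron DBU; if it exists with a different DBU, the tool issues an error and exits.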
Note:
Runset functions are not combined during merging. If two similar runset
functions in the same violation are merged, they will exist as separate functions
in the destination database.
For modify-in-place operations, only the input PYDB or cPYDB is specified using the -i
command-line option. There is no output database.
Note:
The -classify command-line option is supported only with PYDB and not
cPYDB.
Hierarchy Considerations
The PYDB stores cell descendant information for every unique cell, including an instance
translation of all its descendants. This enables the splitting of a database to properly
update the instance translation to top for “Highlight In Top Cell” functionality.
A common use model might be to completely split out a subblock and make it the new top
cell for the destination PYDB. For example,
icv_pydb -i run_details/pydb/PYDB_TOP -o new_pydb/PYDB_Block1 \
-select_cell_hierarchical "Block1" -vue Block1.vue -vue_template \
TOP.vue \
-new_top_cell TOP,Block1
In this example hierarchy, a new PYDB is created with Block1 as the top cell and all
violations in Block1 and below it are selected. Since PYDB has all of the cell descendant
information, a translation is not required when the new top cell is below the original top
cell. An error is issued if the new top cell is below the original top cell and cell selection
includes anything above the new top cell.
• <pydb>
Top-level enclosing tag for the command-file XML document. There must be one and
only one <pydb> tag; everything else is enclosed in it.
• <pydb_operation>
Top-level enclosing tag of a single database operation, such as a split, merge, or
copy. There might be multiple pydb_operation tags in a file.
• <input_pydb path="path/to/pydb"
field_selection_mode="UNION|INTERSECTION">
Designates an input PYDB database, just as the -i command-line option does. The
optional field_selection_mode attribute controls how field-based <select_errors>
tags from this database are combined.
◦ <select_layer_names pattern="m1*"/>
Specifies that only layer names matching a given pattern name (wildcards allowed)
are included from this database.
◦ <unselect_layer_names pattern="m2*"/>
Specifies that no layer names matching a given pattern name (wildcards allowed)
are included from this database.
◦ <select_layers range="10-11,0-255"/>
Specifies that only layers matching the given range are included from this database.
◦ <unselect_layers range="10-11,0-255*"/>
Specifies that no layers matching the given range are included from this database.
◦ <select_cell pattern="cellA*"/>
Specifies that only cells matching the given pattern (wildcards allowed) are included
from this database. Equivalent to the -select_cell command-line option, and can
be specified multiple times.
◦ <unselect_cell pattern="cellA*"/>
Specifies that no cells matching the given pattern (wildcards allowed) are included
from this database. Equivalent to the -unselect_cell command-line option, and
can be specified multiple times.
◦ <select_cell_hierarchical pattern="cellA*"/>
Specifies that only cells matching the given pattern (wildcards allowed)
and their descendants are included from this database. Equivalent to the
-select_cell_hierarchical command-line option, and can be specified multiple
times.
◦ <unselect_cell_hierarchical pattern="cellA*"/>
Specifies that no cells matching the given pattern (wildcards allowed)
and their descendants are included from this database. Equivalent to the
-unselect_cell_hierarchical command-line option, and can be specified multiple
times.
◦ <select_violation_comment pattern="Met1*"/>
Specifies that only violation comments matching the given pattern (wildcards
allowed) are included from this database. Equivalent to the -svc command-line
option, and can be specified multiple times.
◦ <unselect_violation_comment pattern="Met1*"/>
Specifies that no violation comments matching the given pattern (wildcards allowed)
are included from this database. Equivalent to the -uvc command-line option, and
can be specified multiple times.
◦ <select_enclosed cell="cellname" x1="1.0" y1="2.0" x2="3.0" \
y2="4.0"/>
Specifies that in the given cell, only errors which are enclosed by the given window
coordinates are included. Equivalent to the -select_enclosed command-line
option, and can be specified multiple times.
◦ <unselect_enclosed cell="cellname" x1="1.0" y1="2.0" x2="3.0" \
y2="4.0"/>
Specifies that in the given cell, no errors which are enclosed by the given window
coordinates are included. Equivalent to the -unselect_enclosed command-line
option, and can be specified multiple times.
◦ <select_interacting cell="cellname" x1="1.0" y1="2.0" x2="3.0" \
y2="4.0"/>
Specifies that in the given cell, only errors which interact with the given window
coordinates are included. Equivalent to the -select_interacting command-line
option, and can be specified multiple times.
◦ <unselect_interacting cell="cellname" x1="1.0" y1="2.0" x2="3.0" \
y2="4.0"/>
Specifies that in the given cell, no errors which interact with the given window
coordinates are included. Equivalent to the -unselect_interacting command-
line option, and can be specified multiple times.
◦ <select_classified> ... </select_classified>
Specifies that only errors that match the given list of classifications are included.
Equivalent to the -select_classified command-line option.
◦ <unselect_classified> ... </unselect_classified>
Specifies that errors that match the given list of classifications are excluded.
Equivalent to the -unselect_classified command-line option.
◦ <rename_cell old="cellA" new="cellB"/>
Renames a single cell from this input database. Can be specified multiple times.
◦ <rename_cells file="rename_cells.csv"/>
Renames cells from this input database according to the given CSV file. Equivalent
to the -rename_cells command-line option.
◦ <rename_violation old="Rule1" new="Rule2"/>
Renames a single violation from this input database. Can be specified multiple
times.
◦ <rename_violations file="rename_violations.csv"/>
Renames violations from this input database according to the given CSV file.
Equivalent to the -rename_violations command-line option.
◦ <top_cell old="top1" new="top2" dx="1.0" dy="2.0" angle="90" \
reflected="1"/>
Specifies a different destination top cell than the source database, and provides a
translation from the source top cell to the destination top cell. This tag is required
if merging into a PYDB with a different top cell. The instance translations to top
from the source database are updated to the context of the new top cell to enable
highlighting violations in the new top cell.
◦ <select_errors command_id="1" cell_id="2">
<error id="1"/>
<error id="2"/>
...
</select_errors>
Selects individual errors to be included from this input PYDB using numeric
IDs from the source database. If there are no <error> records within the
<select_errors> record, all errors from the command/cell are included. If the
cell_id is not specified, all errors from the command are included. This tag can be
specified multiple times.
◦ <select_errors field="Instance Name" value="I123"/>
<select_errors field="Distance" value="4.5"/>
Specifies that errors where the given string, integer, or double field is equal to the
given value are included. The field name should be the heading of the column
from the LAYOUT_ERRORS file. Field-based selection options are combined
according to the field_selection_mode attribute of the <input_pydb> or
<classify_errors> section in which they occur.
◦ <select_errors field="Instance Name" pattern="I1*"/>
Specifies that errors where the given string field matches the given wildcard
pattern are included. The field name should be the heading of the column from the
LAYOUT_ERRORS file. Field-based selection options are combined according to
the field_selection_mode attribute of the <input_pydb> or <classify_errors>
section in which they occur.
◦ <select_errors field="Distance" min="1.0" max="2.0"/>
Specifies that errors where the given double or integer field is in the given range
are included. The field name should be the heading of the column from the
LAYOUT_ERRORS file. Field-based selection options are combined according to
the field_selection_mode attribute of the <input_pydb> or <classify_errors>
section in which they occur.
◦ <select_errors field="Value" min="1.0" max="2.0" \
when_field="Report" is="Area"/>
Specifies that errors which meet the given selection criteria are included. The
addition of the when_field and is attributes make this selection conditional on
another value in the row. This is useful for functions such as density(), when there
might be many Report and Value pairs for a single error and the selection must be
only for a particular Report. The previous example selects density errors with Area
in the range of 1.0-2.0. The field name and when_field should be the headings of
the appropriate columns from the LAYOUT_ERRORS file. Field-based selection
options are combined according to the field_selection_mode attribute of the
<input_pydb> or <classify_errors> section in which they occur.
◦ <classify_errors class="classname" comment="user comment"
field_selection_mode="UNION|INTERSECTION">
Classifies errors in this input PYDB according to the given class and applies the
optional user comment. Any error selection options may be specified inside the
<classify_errors> section and will be combined with selection options that are
outside of the <classify_errors> section. Multiple <classify_errors> sections
can be specified in a single <input_pydb> section.
The optional field_selection_mode attribute controls how <select_errors>
tags based on field values are combined inside this <classify_errors> section.
The default is UNION, which means that errors which meet any <select_errors
field...> criteria will be included. If INTERSECTION is specified, then only errors
which meet all of the <select_errors field...> criteria will be included.
◦ <exclude_heat_map/>
Specifies that heat map data is not copied from the input database to the output
database. Equivalent to the -exclude_heat_map command-line option.
◦ <exclude_error_classification/>
Specifies that error classification data is not copied from the input database
to the output database. Equivalent to the -exclude_error_classification
command-line option.
• <output_pydb path="path/to/pydb" dbu="0.0001">
Designates an output PYDB database, just as the -o command-line option does. The
dbu parameter is optional and is equivalent to the -dbu command-line option. The
output_pydb option can be specified at most one time within a single
pydb_operation.
◦ <overwrite/>
Specifies that an existing output database at the given path is overwritten.
◦ <vue file="output.vue" template="template.vue"/>
Specifies that the given VUE file should be created which points to the new output
database. Equivalent to the -vue command-line option. The optional template
parameter specifies an existing VUE file to use as a template.
• <output_layout file="output_layout_file" format="GDSII|OASIS">
Specifies that violations from the input PYDB are exported to the output layout
specified by the given file and format.
◦ <violation_map file="mapping_file" out="output_file"/>
Specifies a violation mapping file or output violation mapping file for this output
layout.
◦ <include_unmapped/>
If a violation mapping file is used, specifies that violations without an entry in the
violation mapping file are included in the output on arbitrarily assigned layers and
datatypes.
◦ <input_layout file="input_layout_file"/>
Specifies the input layout from which the design hierarchy is read.
◦ <command_property number="property_number"/>
Specifies that the runset file:line:function is added to each polygon in the output
layout on the given property number. This might increase the size of the output
layout substantially.
◦ <host_init host="host1:#cores"/>
Specifies a host and the number of cores to use for distributed processing.
</classify_errors>
</input_pydb>
</pydb_operation>
</pydb>
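Assembled from the tags described above, a minimal command file for a split operation with classification might look like the following sketch. The nesting shown here, and the class and comment attribute spellings on <classify_errors>, are assumptions based on the tag descriptions; the exact syntax may differ.

```xml
<pydb>
  <pydb_operation>
    <input_pydb path="run_details/pydb/PYDB_TOP">
      <select_cell_hierarchical pattern="Block1"/>
      <classify_errors class="Waived" comment="reviewed"
                       field_selection_mode="INTERSECTION">
        <select_errors field="Distance" min="1.0" max="2.0"/>
      </classify_errors>
    </input_pydb>
    <output_pydb path="new_pydb/PYDB_Block1"/>
  </pydb_operation>
</pydb>
```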
layout, which was used for the IC Validator run. Layer and datatype numbers might be
specified by a violation mapping file, or they are assigned arbitrarily.
The violation mapping file is a CSV file with the following columns:
• Violation comment
Required. The full text of the violation or rule comment. Wildcard matching is
supported.
• Command
Optional. The runset file:line:function of a specific IC Validator command within the
given rule. Wildcard matching is supported. If this column is empty, all commands
within the violation or rule will be matched.
• Layer
Required. The layer number that is used to output matching violations.
• Datatype
Required. The datatype number that is used to output matching violations.
If a violation map file is given, only the rules and commands listed in the violation mapping
file are exported. To include rules and commands without a listing in the violation map file,
use the -include_unmapped command-line option.
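As an illustration, a violation mapping file with the four columns listed above could be generated with a short script. This is only a sketch: the rule comments, the command string, and the layer and datatype numbers below are hypothetical, and whether a header row is expected is not stated in this guide, so only data rows are written.

```python
import csv

# Each row: violation comment, command (optional), layer, datatype.
# The comment and command values below are hypothetical examples.
rows = [
    ["Met1*", "", 100, 0],                       # all Met1* rules -> layer 100:0
    ["Met2*", "/drc.rs:120:external1", 101, 1],  # one specific command -> 101:1
]
with open("violation_map.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

An existing mapping can also be dumped as a reference with the -violation_map_output command-line option (see Example 46) and then edited in the same format.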
If no input layout is given, the icv_pydb utility attempts to use the original layout path
stored with the PYDB to read the design hierarchy.
Example 45 Example With Selections and Creating a Reference Violation Mapping File With
the Layers and Datatypes Used
icv_pydb -i run_details/pydb/PYDB_AD4FUL \
-svc "Met1*" -select_cell "AD4FUL" \
-o errors.oas -format OASIS \
-violation_map_output violation_map_used.csv
Example 46 Example of Creating a Violation Map File With Arbitrarily Assigned Layer and
Datatype Numbers From a PYDB
icv_pydb -i run_details/pydb/PYDB_AD4FUL \
-violation_map_output violation_map.csv
...
Path: /global/cpydb1
Name: CPYDB1A
Path: /global/cpydb2
Name: CPYDB2
...
Renamed cPYDBs:
Old Path: /global/cpydb1
New Path: /new/path/cpydb1
Name: CPYDB1
13
Pattern Matching
Overview
IC Validator pattern matching compares two geometrical configurations (patterns) to
determine if they match. In the pattern matching process, you match a target pattern
from a design layout against a reference pattern from an existing pattern library.
Figure 78 illustrates this step.
The reference pattern represents any combination of geometric shapes that cannot be
manufactured, such as a lithography hotspot, or that is difficult to interpret accurately
with standard DRC rules, such as complex two-dimensional geometric measurements.
Reference patterns are pre-identified through robust verification and validation techniques,
and they are precisely specified and added to a pattern library before matching.
An advantage of pattern matching as a DRC-based flow is that it is much faster than
simulation-based lithography detection, which is extremely computationally expensive
and time-consuming. Pattern matching therefore offers a cost-effective approach to
identifying lithography-induced or process-induced hotspots earlier in the design flow,
making real-time verification and repair possible.
During pattern library creation, pattern information is extracted from the source pattern and
written to a pattern library. The pattern library is called during the pattern matching step as
a reference for identifying the same or similar patterns in the input design layout.
The pattern_learn() function creates a pattern library. The pattern_match() function
performs pattern matching. See Chapter 1, “Tables of Runset Functions” in the IC
Validator Reference Manual for a summary of all of the pattern matching functions.
Required Input
Three user-defined inputs are required for creating a pattern library. These inputs are
part of the input layout, or they are generated through runset operations before the
pattern_learn() function call. Figure 80 shows an example of required input.
• Pattern layers. Specifies that a pattern contains single or multiple pattern layers, such
as a pattern consisting of metal and via layers.
• Pattern marker. Specifies a rectangular polygon that is placed on each source pattern
to mark the location of interest, such as the pinch or bridge location, or a DRC violation.
The pattern marker is output during pattern matching to report the match between the
pattern library and the design.
• Pattern extent. Specifies the bounding region of a pattern. Pattern layers in this
region are processed and registered to the pattern library. The pattern extent can
also be generated through the ambit_mode and match_ambit arguments of the
pattern_learn() function.
Optional Input
You might also need to provide the following specifications for your pattern definition.
Figure 81 shows an example of optional input.
• Edge tolerance layers. Specifies a set of polygon layers that define the allowed edge
placement variation for each pattern layer edge.
• Ignore region layers. Contains one or more rectilinear polygons defining areas on the
source pattern that are excluded from the matching process.
• Optional pattern marker layers. Contains polygons that can be attached to a pattern
and retrieved for certain post-pattern matching operations. For example, it can be user-
defined fix guidance for a particular pattern.
• Pattern text ID layer. Contains the text ID of the source patterns. If the text ID layer is
not provided, a tool-generated text ID is used.
• Pattern text property layers. Contains the text information defined for a
source pattern, for example, the pattern ID, pattern type, or minimum CD.
You attach these properties to the pattern during pattern library creation and
retrieve them by using the select_marker_by_double_property() and
select_marker_by_string_property() functions for post-pattern matching
operations.
Note:
The pattern library path specified by the pattern_library_path argument
must exist before pattern library creation. Otherwise a runtime error occurs.
6. Layer assignments
metal : polygon_layer = assign ( {{3,0}} );
pattern_marker : polygon_layer = assign ( {{4,0}, {5,0}} );
pattern_text_id : text_layer = assign_text( {{6,0}} );
error_list : list of write_error_map_s = {};
matching mode. The pattern extent is 0.4 x 0.4 μm and is created by the tool from
the center of the pattern marker with a 0.2 μm match_ambit.
tmp_violation @= {
@"pattern_learn result";
pattern_learn (
pattern_library_name = $PM_LIB_NAME,
pattern_layers = {metal},
pattern_marker = pattern_marker,
pattern_text_id = pattern_text_id,
match_ambit = match_ambit,
ambit_mode = ambit_mode,
pattern_fuzziness = pattern_fuzziness,
uniform_fuzzy_size = uniform_fuzzy_size,
pattern_reflect = pattern_reflect,
pattern_rotate = pattern_rotate
);
}
pattern_learn result:pattern_1
pattern_learn ................................ 4 violations found.
pattern_learn result:pattern_5
pattern_learn ................................ 2 violations found.
In this report, six source patterns are processed by the pattern_learn() function.
Two of the six patterns are identified as pattern_5 and four are identified as pattern_1.
Therefore, two unique patterns are captured in the pattern library.
The pattern_learn() function also identifies one pattern as an invalid
two-dimensional pattern because it has no corner within the pattern extent.
Therefore, it cannot be added to a two-dimensional pattern library. Check the
cell.LAYOUT_ERRORS file after pattern library creation to determine if there are invalid
patterns.
• Pattern Library
You can store more than one pattern library under the same pattern library path defined
by the pattern_library_path argument of the pattern_options() function. You call
one or multiple pattern libraries during pattern matching.
At the beginning of the log file, you see the parameters and their corresponding values
applied for generating this pattern library for the first time. To add new patterns to an
existing pattern library, use the same set of parameters and values. Otherwise an error
occurs.
At the end of the log file, you see the number of unique patterns that are registered for
each pattern_learn() function call.
The last line of the log file indicates that this pattern library is writable, that is, the new
patterns can be added to it. This pattern library can be converted to either the GDSII or
OASIS format for viewing, if it is viewable.
Options:
<pdb_path>.............pattern library path
<pdb_name>.............pattern library name
-read..................converts a pattern library to graphical
output.
-lock..................adds write and view protection to a pattern
library.
<output_name>..........optional, specifies the graphical output file
name, [default:pdb_view.gds]
This example converts the met pattern library, which is located in the /test/pattern_lib
directory, to the hs.gds output GDSII file.
ICV_INSTALL_DIRECTORY/contrib/pdb_utility.pl /test/pattern_lib met -read
hs.gds
Checking a Layout
This section shows an example of the pattern matching flow using the pattern_match()
function.
If you use a Milkyway library, set format to MILKYWAY and specify the library_path to
the parent directory of the Milkyway library.
4. General options
To store and report all of the violations generated by the pattern_match() function,
set error_limit_per_check to ERROR_LIMIT_MAX in the error_options() function.
The default is 100.
error_options (
error_limit_per_check = ERROR_LIMIT_MAX
);
The pattern_options() function defines the path of a pattern library, and it can be
called only one time in the runset. This function specifies the path used to access one
or more pattern libraries.
6. Layer assignments
MET1 : polygon_layer = assign ( {{15,0}} );
MET2 : polygon_layer = assign ( {{17,0}} );
MET3 : polygon_layer = assign ( {{19,0}} );
MET4 : polygon_layer = assign ( {{21,0}} );
MET5 : polygon_layer = assign ( {{23,0}} );
MET6 : polygon_layer = assign ( {{25,0}} );
MET7 : polygon_layer = assign ( {{27,0}} );
MET8 : polygon_layer = assign ( {{29,0}} );
MET9 : polygon_layer = assign ( {{31,0}} );
MET10 : polygon_layer = assign ( {{33,0}} );
In this example, MET1 through MET10 are assigned and subjected to a pattern
matching check.
7. Call the pattern_match() function to perform pattern matching.
This function can be called multiple times within a runset. The following example shows
the pattern_match() function executing pattern matching on the metal layers of the
TOP.gds file.
for(i = 0 to strtoi($PM_TOP_MET)-1) {
layer_index : integer = i + 1;
tmp_violation @= {
@"METAL"+layer_index+" PM RESULT: ";
pattern_match (
pattern_library_name = $PM_LIB_NAME,
pattern_layers = {metal_layer_list[i]}
);
}
out_layer_index = layer_index;
metal_error_list.push_back({tmp_violation,
{out_layer_index,0}});
}
Optionally, you can write the output from the pattern_match() function to a GDSII or
OASIS file.
output_lib=gds_library("out.gds");
write_gds(
output_library=output_lib,
errors=metal_error_list
);
METAL3 PM RESULT:
pattern_match ...............................69 violations found.
METAL4 PM RESULT:
pattern_match ................................2 violations found.
ERROR DETAILS
--------------------------------------------------------------------
METAL2 PM RESULT:
-------------------------------------------------------------------
/match.rs:38:pattern_match
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Structure (lower left x, y ) (upper right x, y ) Pattern Orientation
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TOP (8.1600, 528.2500) (8.4000, 528.7200) pattern_78 R90
TOP (2.1400, 528.6700) (2.3800, 529.1400) pattern_57 R90
TOP (100.1450, 529.3000) (100.3850, 529.7700) pattern_107 R90FX
TOP (100.1450, 529.2300) (100.3850, 529.7000) pattern_108 R90
TOP (107.9800, 531.7500) (108.2200, 532.2200) pattern_80 R0
TOP (83.7900, 534.7600) (84.0300, 535.2300) pattern_7 R180
The error summary section contains the number of matched patterns on each metal layer
produced by the pattern_match() function. The error details section lists the coordinates
of the pattern marker and pattern ID of each matched pattern.
You can review pattern matching results in VUE, just as you do for other DRC violations.
or your own yield detractor pattern library to search a design for potential manufacturing
related hotspots to perform automatic fixing and revalidation for all errors. This improves
the overall manufacturability of the design.
You run IC Validator pattern matching within the IC Compiler tool using the same
set of IC Compiler commands used to perform signoff DRC checking. You use the
signoff_drc command to do pattern matching and the signoff_autofix_drc command
to automatically fix the violations (hotspots) detected by the signoff_drc command. See
the IC Compiler Implementation User Guide, which is available on SolvNetPlus, or the
IC Compiler man pages for detailed command syntax and usage.
The following sections describe how to detect and correct hotspots using the signoff DRC
commands in the IC Compiler tool (not applicable in the IC Compiler II tool).
You use the optional_markers() function to retrieve the fix guidance and the
milkyway_route_directives() function to pass the fixing guidance to the router.
3. Run ADR within the IC Compiler tool for hotspot detection and fixing.
Write an IC Compiler TCL script to execute ADR in the icc_shell, or use the
IC Compiler GUI to execute the commands.
The IC Compiler tool starts with user-defined fix guidance based on a defined order.
Figure 83 shows the corrected results for pattern_122 using the user-defined guidance
and the default ADR flow, respectively.
14
Pattern Library Manager
Getting Started
The IC Validator Pattern Library Manager (PLM) is a graphical user interface (GUI)
that provides an alternate way to build a pattern library for an IC Validator pattern
matching run, instead of using the pattern_learn() function in batch mode.
Launch the Pattern Library Manager either from the shell command line or from within
the IC WorkBench EV Plus layout editor.
1. The command-line syntax is
icv_plm
Note:
If you launch the Pattern Library Manager and the IC WorkBench EV
Plus tool separately from the command line, both of them must be in the
same directory path (on the same machine) to establish a connection.
For information about how to connect the Pattern Library Manager to IC
WorkBench EV Plus, see the Communicating With IC WorkBench EV Plus
section.
2. To access the Pattern Library Manager from IC WorkBench EV Plus, source the IC
Validator plm_button.tcl file in the command window.
% source $env(ICV_INSTALL_DIRECTORY)/contrib/plm_button.tcl
The ConnectPlm button is added to the toolbar. Click this button to launch the Pattern
Library Manager and establish a connection, as shown in Figure 84.
Note:
You can also put this plm_button.tcl file in your IC WorkBench EV Plus
startup file to avoid setting up the connection each time.
The IC WorkBench EV Plus tool connects to Pattern Library Manager automatically when
you set the ICWBEV_USER environment variable:
For M-2016.03-SP2 and newer versions of IC WorkBench EV Plus:
setenv ICWBEV_USER ICV_PLM
A yellow button is added to the toolbar. Click this button to start the Pattern Library
Manager and establish a connection.
This command returns a number. This is the open socket that IC WorkBench EV Plus
established.
2. In the Pattern Library Manager, choose Tools > Communication. Enter the port number
and click OK.
Menu Description
File Creates, opens, or saves a pattern library. Imports a Pattern Database format
pattern library, and exports a Pattern Library Manager format pattern library
Help Displays information about the interface language and the user guide
Option Description
Import Reads in a pattern database format library to the Pattern Library Manager
7. You can now add patterns to the new pattern library. As you add new patterns, each of
them will appear under the pattern library you have created.
Command Description
New Pattern By Layer Extracts multiple patterns from the selected area of a layout
Command Description
Note:
By default, polygons outside of the pattern extent are not captured in the
generated pattern. To maintain the surrounding area outside the pattern
extent for further modification before the pattern is finalized, choose Keep
Context before clicking OK.
5. Draw a clipping window in IC WorkBench EV Plus to define the pattern extent. A new
layout window appears. Each polygon edge is labeled with a number starting from 0.
Edge labels are created as text layers under the layer list. A new pattern is added to
the pattern library, as shown in Figure 90.
Note:
The only allowed operation in IC WorkBench EV Plus after clicking OK in
step 4 is to draw a clipping window to define the pattern extent.
6. Define pattern properties in the Pattern Property tab if needed. For more information,
see the Attaching Pattern Properties to a Pattern section.
7. Add edge constraints in the Tolerance tab if needed. For more information, see the
Adding an Edge Tolerance to a Pattern section.
8. Repeat the previous steps to generate additional new patterns if there are multiple
locations in the same layout. Upon completion, choose File > Save in the Pattern
Library Manager to save the changes to your pattern library file.
A pattern library with one or multiple patterns from the layout is generated.
Note:
By default, polygons outside of the pattern extent are not included in the
generated pattern. To keep the surrounding area outside of the pattern
extent, choose Keep Context before clicking OK.
5. Draw a clipping window in IC WorkBench EV Plus to select the part of the layout for
generating new patterns. Patterns are extracted and listed under the currently open or
selected pattern library, as shown in Figure 92.
Note:
The only allowed operation in IC WorkBench EV Plus after clicking OK
in step 4 is to draw a clipping window to define the area of the layout for
pattern extraction.
6. Define pattern properties in the Pattern Property tab if needed. For more information,
see the Attaching Pattern Properties to a Pattern section.
7. Add edge constraints in the Tolerance tab if needed. For complete information, see the
Adding an Edge Tolerance to a Pattern section.
8. Upon completion, choose File > Save in the Pattern Library Manager to save the
changes to your pattern library file.
A pattern library with multiple patterns from the layout is generated, as shown in Figure 93.
5. For more information about layout creation and editing, see the IC WorkBench EV Plus
User Guide, which can be found on the Help menu. See Figure 97.
6. Click PlmRefresh in IC WorkBench EV Plus to update the pattern library with all of the
layers created for this pattern. A dialog box appears in the Pattern Library Manager, as
shown in Figure 98. Click Yes.
7. Define a pattern property in the Pattern Property tab if needed. See the Attaching
Pattern Properties to a Pattern section for more information.
8. Add edge constraints in the Tolerance tab if needed. See the Adding an Edge
Tolerance to a Pattern section for more information.
9. Repeat the previous steps to generate additional patterns under the same pattern
library file. Upon completion, choose File > Save in the Pattern Library Manager to
save the changes to your pattern library file.
Editing a Pattern
You can edit a pattern inside of the Pattern Library Manager by attaching one or more text
properties to this pattern, or by specifying an edge tolerance for non-uniform matching
mode.
You can also edit a pattern layout in IC WorkBench EV Plus by adding, removing, or
moving polygons, edges, and vertices.
5. Click Add to add the tolerance to the pattern library. Edge tolerances appear on layer
(9001:0) in IC WorkBench EV Plus.
6. Choose File > Save to save changes to the pattern library.
7. Click Update to modify an existing edge tolerance, or click Delete to remove an edge
tolerance.
8. You can also specify edge-to-edge tolerance in the Edge2Edge pane by selecting two
edges, as shown in Figure 101.
Note:
For the edge-to-edge constraint to be recognized, at least one of the two
selected edges must be defined with an edge tolerance.
3. Add a property name and value to the String Property field if the property is a
string type, or to the Double Property field if it is a double type. For the example in
Figure 103, the hs_type property is a string type, and it is assigned a value of space.
The min_cd property is a double type, and it is assigned a value of 0.056.
3. Edit this pattern accordingly. For example, if you want to remove the sliver in pat_2 as
shown in the Figure 104, perform the following steps:
a. To select the shape, choose Tool > Select, or click the Select button on the Tools
toolbar and click the shape.
b. To remove the selected shape from pat_2, choose Edit > Delete or press the Delete
key on the keyboard.
c. Click PlmRefresh in IC WorkBench EV Plus to update the change made to pat_2. A
message appears, as shown in Figure 105. Click Yes.
4. In the Pattern Library Manager, choose File > Save to save the pattern library, as
shown in Figure 106.
consists of two parts: an OASIS directory to store the graphical information and a text file
in XML format to store edge tolerances and edge-to-edge constraints. A pattern library
created using the pattern_learn() function by running an IC Validator job consists of
three parts: a binary file named pattern.dat, a text file named log that records pattern
library creation parameters and pattern statistics, and an optional text file in XML format
to store edge tolerances and edge-to-edge constraints when the pattern matching mode
is set to PM_EDGE_NONUNIFORM. The pattern_match() function executes pattern matching
in an IC Validator run. The pattern library must be ready before a pattern matching run,
and it must be in the binary format, which is also referred to as the pattern database
(PDB) format.
Table 61 shows a summary comparison of these two types of pattern library formats.
Table 61 Summary Comparison of Pattern Library Formats
A pattern library created using the pattern_learn() function is in the Pattern Database
format, which includes:
1. pattern.dat (binary)
2. log
3. An optional XML file that stores edge tolerances when the matching mode is
PM_EDGE_NONUNIFORM
This library must be imported into the Pattern Library Manager for editing.
A pattern library created in the Pattern Library Manager is in the Pattern Library
Manager format, which includes:
1. OASIS to store graphical information
2. An XML file that stores edge and edge-to-edge tolerances
This library must be exported from the Pattern Library Manager before it can be used in
an IC Validator pattern matching run.
Note:
If the pattern library in a Pattern Database format is locked (encrypted), the
import fails.
3. Click OK. The pattern library is converted to a Pattern Library Manager format and
uploaded in the Pattern Library Manager.
4. Click OK. The pattern library is converted to the Pattern Database format and saved on
the disk with the name and path defined in step 3. An Export Result window appears,
as shown in Figure 109. Click OK to close this window and complete the exporting
process.
a. Browse to the location of the input layout. Select the design format. Enter the
top-cell name. Assign the pattern layers.
b. You can specify the IC Validator command-line option in the Run Options column.
See the IC Validator Basics chapter of the IC Validator User Guide for more
information about command-line options.
c. Specify the path to the output directory.
d. Click More to show additional options. Select the group_errors_by_pattern
check box if you want to group the results based on the pattern name.
e. Click OK to execute pattern matching on the specified layout.
3. Pattern matching results appear in a dialog box, as shown in Figure 112. Detailed
information is saved in the output directory. Click OK to close this dialog box.
A
IC Validator Architecture
Basic Architecture
Figure 113 shows the basic IC Validator architecture.
B
LVL Utility
This appendix discusses the Layout-Versus-Layout (LVL) utility and the QuickLVL utility
that are available with the IC Validator tool.
Description
The LVL utility allows you to
• Generate an assign file from a single library.
• Compare two layouts of a design using either an XOR-based comparison or a
NOT-based comparison. The errors that are found can be viewed in VUE using the
cell1_vs_cell2.vue file. See the IC Validator VUE User Guide for more information.
Note:
The LVL utility supports text with any text character, except the NULL
character.
For the NOT-based comparison,
◦ The baseline layer is compared using a NOT operation with the check library layer.
◦ The check library layer is compared using a NOT operation with the baseline layer.
The errors from each NOT operation are shown separately in the
cell1_vs_cell2.LVL_ERRORS file.
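The two NOT operations above can be pictured with a small Python sketch. This is only a set-based stand-in (the `not_compare` function and the shape identifiers are hypothetical, for illustration); the real utility performs geometric NOT operations on layout data:

```python
def not_compare(baseline, check):
    """Compute not12 and not21 for two layer -> set-of-shapes mappings.

    not12 holds shapes present in a baseline layer but not the matching
    check layer; not21 holds the reverse, mirroring the two NOT
    operations whose errors are reported separately.
    """
    layers = set(baseline) | set(check)
    not12 = {l: baseline.get(l, set()) - check.get(l, set()) for l in layers}
    not21 = {l: check.get(l, set()) - baseline.get(l, set()) for l in layers}
    return not12, not21

# Shape identifiers are stand-ins, keyed by (layer, datatype).
baseline = {(31, 0): {"shape_a", "shape_b"}}
check = {(31, 0): {"shape_b", "shape_c"}}
```

A shape found only in the baseline appears in not12, and a shape found only in the check library appears in not21, matching how the two NOT results are reported separately.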
The LVL utility supports input in the following layout formats:
• GDSII
• Milkyway
• OASIS
• OpenAccess
• NDM
Command-Line Options
The LVL utility command-line syntax is
Table 62 describes the basic command-line options and the command-line options
available for distributed processing with the LVL utility.
Syntax Description
-af filename Name of the file where the assign layers in the library are stored.
This file is generated by the LVL utility, and it has the same syntax
as an IC Validator runset ASSIGN section. Include a path if you want
the file in a directory that is not in the current directory.
-c cell Cell name of both libraries, library1 and library2. The cell name is
also needed for generating an assign file with the -af option.
-c1 cell1 Cell name of the first library, library1.
-c2 cell2 Cell name of the second library, library2.
-elpc num Sets the error limit for each type of check. The default is 100.
-error_limit_per_check num
Note:
Setting this command-line option to ERROR_LIMIT_MAX causes
all errors up to a tool-defined maximum limit to be stored. The
limit can be set above the tool-defined maximum limit by giving a
specific number. However, setting the limit above the tool-defined
maximum might result in decreased performance and increased
disk usage for large numbers of errors, and therefore is not
recommended. Setting it to 0 suppresses storage of all errors.
-flatten Flattens all output to the top cell.
-host_init LSF Acquires hosts through an LSF batch queuing system by reading
either the LSB_HOSTS or LSB_MCPU_HOSTS environment
variables. The priority of the remote login protocol used to log in
to remote hosts is described in the -host_login command-line
option. See the -host_init LSF IC Validator command-line option
in Chapter 1, IC Validator Basics for more information.
6. If the libraries have the same cell name, provide the cell name with the -c command-line option. If the libraries
have different cell names, you must provide both cell names using the -c1 and -c2 command-line options.
-host_init SGE Acquires hosts through an SGE batch queuing system by reading
the PE_HOSTFILE environment variable. The priority of the remote
login protocol used to log in to remote hosts is described in the
-host_login command-line option. See the -host_init SGE
IC Validator command-line option in IC Validator Basics for more
information.
-host_disk Specifies a local group directory for all hosts specified with
-host_init or -host_add. See the -host_disk IC Validator
command-line option in IC Validator Basics for more information.
-host_memory Memory_Value M|G|T
Attempts to maintain the user-specified target memory for the hosts used in the
IC Validator job by rescheduling and threading. This switch cannot be used to
reduce the memory of a single command. See the -host_memory IC Validator
command-line option in IC Validator Basics for more information.
-ignore_text_case Instructs the utility to ignore case when comparing text. The default
behavior is that text comparison considers case.
-lf filename
-layer_file filename
Layer file for both libraries that is used for layer mapping, layer merging, and
layer naming. It must contain a valid PXL runset ASSIGN section. Include the path
if the library is not in the current directory. See Layer Files on page 389 for more
information.
-lf1 filename
-layer_file1 filename
Layer file for the first library. Always use this command-line option along with
-lf2. Include the path if the library is not in the current directory. See Layer Files
on page 389 for more information.
-lf2 filename
-layer_file2 filename
Layer file for the second library. Always use this command-line option along with
-lf1. Include the path if the library is not in the current directory. See Layer Files
on page 389 for more information.
7. The layer file can be a full runset. The LVL utility, however, ignores everything but the ASSIGN section.
-oa_color_mapping option Specifies how to use the locked and unlocked anchor bit for the
OpenAccess color layer mapping. The selections for option are
• ignore. Uses the color mappings for which the locked or unlocked
keyword is not specified, and ignores the color mappings for which
locked or unlocked is specified.
• strict. Uses the color mappings for which the locked or unlocked
keyword is specified, and ignores the color mappings for which
locked or unlocked is not specified.
• full. Sets all shapes that are read with color to locked. The
mapping for this setting is the same as the mapping for the
compatibility setting.
• compatibility. Follows the industry standard. Ignores color
mappings for which the colorAnnotate keyword is located after the
color specification. When no color mapping rule matches a color
shape, the mapping is used without color.
-oa_color_mapping1 ignore | strict | full | compatibility
Specifies how to use the locked and unlocked anchor bit for the OpenAccess color
layer mapping for OpenAccess library1.
-oa_color_mapping2 ignore | strict | full | compatibility
Specifies how to use the locked and unlocked anchor bit for the OpenAccess color
layer mapping for OpenAccess library2.
-rsi Reports input library information. Supports only GDSII and OASIS
input formats. Reports
• GDSII last modified and last accessed
• md5sum computed with md5sum -b file
• format of the input library
• path to the input library
• command used for the library input
Enumerators must be in a comma-separated list, and
the entire list must be within quotation marks. See the
report_streamfile_information argument of the run_options() function.
Note:
The -rsi command-line option overrides its runset setting in the
run_options() function.
library1 Baseline library name. Include the path if the library is not in the
current directory. When one library is specified with the -af option,
the LVL utility generates an assign file.
You cannot generate an assign file and compare two layouts in one
run of the LVL utility.
The following examples show how to specify input layouts for various
formats:
• GDSII. /path/to/file.gds
• OASIS. /path/to/file.oas
• Milkyway. /library/path/LIBRARY_NAME
• OpenAccess. /path/to/lib.defs/LIBRARY_NAME
• NDM. /library/path/LIBRARY_NAME
library2 Check library name. Include the path if the library is not in the current
directory. Do not specify this file when using the -af command-line
option.
You cannot generate an assign file and compare two layouts in one
run of the LVL utility.
The following examples show how to specify input layouts for various
formats:
• GDSII. /path/to/file.gds
• OASIS. /path/to/file.oas
• Milkyway. /library/path/LIBRARY_NAME
• OpenAccess. /path/to/lib.defs/LIBRARY_NAME
• NDM. /library/path/LIBRARY_NAME
-mf filename
-map_file filename
Specifies the layer mapping file applied to both libraries. See Mapping Layers on
page 390 for more information.
-mf1 filename
-map_file1 filename
Specifies the layer mapping file applied to the first library. See Mapping Layers on
page 390 for more information.
-mf2 filename
-map_file2 filename
Specifies the layer mapping file applied to the second library. See Mapping Layers
on page 390 for more information.
-ndl label
-ndm_design_label label
Design label for both libraries, library1 and library2.
-ndl1 label
-ndm_design_label1 label
Design label for the first library, library1.
-ndl2 label
-ndm_design_label2 label
Design label for the second library, library2.
-oa_dm4 Uses OpenAccess shared libraries that are compatible with DM4
plug-ins.
-oa_dm5 Uses OpenAccess shared libraries that are compatible with DM5
plug-ins. The default is -oa_dm5.
-quick Performs cell-level Quick LVL. See QuickLVL Utility on page 398
for more information.
Note:
The Quick LVL (-quick) utility works only with a single host. If
multiple hosts are acquired from the batch queuing systems,
Quick LVL errors out.
-rt ALL | POINT
-report_touch ALL | POINT
Specifies how point touches are treated. This option is used with the -outside
option.
• ALL. Treat polygon line touches and edge point touches as outside.
• POINT. Treat point touches as outside.
-sel "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
-select_edge_layers "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
Specifies the edge layers and datatypes that are used in the comparison. Ranges
can be used. If a datatype is not specified, it defaults to all datatypes.
-sn layer_name1 ...
-select_names layer_name1 ...
Specifies, by name, the layers on which the utility is run.
-spl "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
-select_polygon_layers "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
Specifies the polygon layers and datatypes that are used in the comparison.
Ranges can be used. If a datatype is not specified, it defaults to all datatypes.
9. If you are comparing two OpenAccess libraries, you can use the layer name and purpose name instead of layer
number and datatype number.
-split_output Specifies that when used with the -not command-line option, the
LVL utility writes not12 results to Cell1_vs_Cell2_not12 and writes
not21 results to Cell1_vs_Cell2_not21.
The not12 notation corresponds to the results of (BaselineLayer
not Checklayer) for each layer in the baseline library (the first library
listed on the icv_lvl command line) and each corresponding layer
in the check library (the second library on the icv_lvl command
line). Likewise, the not21 notation corresponds to the results of
(CheckLayer not BaselineLayer) for each compared layer.
-stl "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
-select_text_layers "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
Specifies the text layers and datatypes that are used in the comparison. Ranges
can be used. If a datatype is not specified, it defaults to all datatypes.
-text Instructs the utility to compare text between two layouts. See
Handling of Text on page 393 for more information. When this
command-line option is not used, only geometry is compared, not
text.
-top_cell_only Compares only the data in the top cells from each library. All
lower-level cell data is dropped.
-uel "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
-unselect_edge_layers "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
Specifies the edge layers and datatypes that are excluded from the comparison.
Ranges can be used. If a datatype is not specified, it defaults to all datatypes.
-un layer_name1 ...
-unselect_names layer_name1 ...
Specifies, by name, the layers that are excluded when the utility is run.
-upl "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
-unselect_polygon_layers "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
Specifies the polygon layers and datatypes that are excluded from the comparison.
Ranges can be used. If a datatype is not specified, it defaults to all datatypes.
10. Text matches only when both location and text string match. Text differences appear as an X in VUE. Text
comparison is only a cell-level operation. For cases of hierarchy mismatch, hierarchy and layout are compared
first followed by text. If the hierarchy and layout do not compare, the text cannot be accurately compared. In
general, if two cells do not have the same name, size, and placement, their text will not be compared.
-utl "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
-unselect_text_layers "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
Specifies the text layers and datatypes that are excluded from the comparison.
Ranges can be used. If a datatype is not specified, it defaults to all datatypes.
-transform [RX,][0|90|180|270][,DX,DY]
Indicates that the top cell of the check layout is transformed as described on the
command line:
• RX. Reflection around the X-axis.
• 0|90|180|270. Counterclockwise rotation angle, in degrees.
• DX,DY. Translation in the X and Y directions, in microns.
See the following examples:
• -transform 90,0,0
• -transform RX,90,1,-1.2
• -transform RX
See QuickLVL Utility on page 398 for more information.
baseline_layout Baseline library name. Include the path if the library is not in the
current directory.
The following examples show how to specify input layouts for each
format:
• OASIS. /path/to/file.oas
• GDSII. /path/to/file.gds
See QuickLVL Utility on page 398 for more information.
check_layout Check library name. Include the path if the library is not in the current
directory.
The following examples show how to specify input layouts for each
format:
• OASIS. /path/to/file.oas
• GDSII. /path/to/file.gds
See QuickLVL Utility on page 398 for more information.
• -oalm1 filename
-oa_layer_map1 filename
• -oalm2 filename
-oa_layer_map2 filename
• -oacm cellmap
-oa_cell_map cellmap
• -oacm1 cellmap1
-oa_cell_map1 cellmap1
• -oacm2 cellmap2
-oa_cell_map2 cellmap2
• -oav viewname
-oa_view viewname
• -oav1 viewname
-oa_view1 viewname
• -oav2 viewname
-oa_view2 viewname
If you are comparing an OpenAccess library to another OpenAccess library, then you do
not need to specify a layer mapping file. The LVL utility matches the layer/purpose pairs
and runs the comparison. However, if you are comparing an OpenAccess library to a
GDSII, OASIS, Milkyway, or NDM library, then a layer mapping file is needed to map the
OpenAccess layer/purpose pair to a layer/datatype in the GDSII, OASIS, Milkyway, or
NDM library. For example,
% icv_lvl lib.defs/OA_LIB gds_lib.gds -c top -oalm layer_map.txt
If each input OA database has a unique layer mapping file, specify the databases
individually:
-oalm1 filename1
-oalm2 filename2
For more information, see the definition of the layer_mapping_file argument of the
openaccess_options() function in the IC Validator Reference Manual.
Cell Mapping
When you are comparing an OpenAccess database to a different format, you might need
to provide the cell mapping to prevent flattening. The OpenAccess library specifies unique
cells by a library/cell/view triplet. The IC Validator tool and the LVL utility must assign a
unique cell name to each unique library/cell/view triplet. The LVL utility flattens hierarchies
with mismatched cell names until the two hierarchies match. Using a cell mapping file can
ensure that cells are named consistently between the two libraries being compared, and
prevent this flattening.
For more information, see the definition of the cell_mapping_file argument of the
openaccess_options() function in the IC Validator Reference Manual.
If each input OpenAccess database has a unique mapping, specify the databases
individually:
-oacm1 cellmap1
-oacm2 cellmap2
If each input OpenAccess database has a different view name for the top cell, specify the
databases individually:
-oav1 viewname1
-oav2 viewname2
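Because each OpenAccess cell is identified by a library/cell/view triplet, the tool must assign a unique flat name per triplet. The Python sketch below shows one hypothetical naming scheme for illustration only (the qualified lib__cell__view form is an assumption, not the tool's actual naming):

```python
from collections import Counter

def unique_cell_names(triplets):
    """Map each (library, cell, view) triplet to a unique flat cell name.

    Hypothetical scheme: keep the bare cell name when it is unambiguous,
    otherwise qualify it with the library and view names. A consistent
    scheme across both libraries avoids mismatched names and the
    resulting hierarchy flattening.
    """
    counts = Counter(cell for _, cell, _ in triplets)
    return {
        (lib, cell, view): cell if counts[cell] == 1 else f"{lib}__{cell}__{view}"
        for lib, cell, view in triplets
    }
```

For example, two INV cells from different libraries receive distinct qualified names, while a cell name that occurs only once is kept as-is.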
Layer Files
By default, all layers of the layout databases are compared. However, by using command-
line options, you can run the utility on a subset of these layers. You can also specify
mapping files to use instead of mapping files included internally in the Milkyway and NDM
libraries.
Layers are mapped before layers are selected or filtered using command-line options.
For example, if -lf layerfile and -mf mapfile are used at the same time, then layer
numbers and datatypes are first mapped using the mapfile file before interpreting the
layerfile file.
• The LVL utility stops with an error if a layer file contains multiple definitions of a layer
name.
In this example, layer A is defined twice. Therefore, the LVL utility reports an error. You
must remove one of the definitions.
A = assign({{1,0}});
B = assign({{2,0}});
A = assign({{10,0}});
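The duplicate-definition check can be mimicked with a short Python scan of the layer file text. This is a simplified illustration only; the real PXL parser handles far more than this one line pattern:

```python
import re

def duplicate_layer_names(layer_file_text):
    """Return layer names that are assigned more than once.

    Simplified scan: treats every line of the form NAME = assign(...)
    as one layer definition.
    """
    seen, duplicates = set(), []
    for match in re.finditer(r"^\s*(\w+)\s*=\s*assign\b", layer_file_text, re.M):
        name = match.group(1)
        if name in seen and name not in duplicates:
            duplicates.append(name)
        seen.add(name)
    return duplicates

layer_file = """A = assign({{1,0}});
B = assign({{2,0}});
A = assign({{10,0}});"""
```

Running `duplicate_layer_names(layer_file)` flags `A`, matching the error the LVL utility reports for this layer file.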
Mapping Layers
The mapping file can be included internally as part of the Milkyway and NDM libraries.
Internal mapping files are used by default. Using the -mf, -mf1, and -mf2 command-line
options, you can map layers using an external file. When you specify a mapping file on the
command-line, it is used instead of the internal mapping file.
Note:
To map layers using the -mf, -mf1, and -mf2 command-line options on an
OpenAccess library, you must also use the -oalm, -oalm1, and -oalm2
command-line options to convert LayerName/Purpose string pairs to
LayerNumber/Datatype numbered pairs.
The mapping file can be in one of three formats: the Milkyway format, the NDM format, or
the IC Validator mapping file format. See the Milkyway Database Application Note and the
Library Data Preparation for IC Compiler User Guide, which are available on SolvNetPlus,
for information about the Milkyway and NDM formats. See Layer Mapping for information
about the IC Validator mapping file format.
The -mf2 command-line option allows you to map layer 2 data in the two.gds library to
layer 1 data. The result is that both libraries end up having their data on the same layer
and the LVL comparison is clean.
Alternatively, you could use layer files to accomplish the same thing. For example,
• file1 contains
Metal1 = assign({{1, 0}});
• file2 contains
Metal1 = assign({{2, 0}});
The first method, which uses mapping files, is easier if you already have IC Validator
layer mapping files for IC Validator runs that can be used with the LVL utility mapping file
command-line options.
Note:
You cannot use both the -sn and -un options in the same run of the utility.
You cannot use both the -spl and -upl options in the same run of the utility.
You cannot use both the -sel and -uel options in the same run of the utility.
You cannot use both the -stl and -utl options in the same run of the utility.
Comparing by Name
You can use command-line options to limit the polygon and edge layers being compared
when a layer file is provided using the -lf command-line option or using both the -lf1
and -lf2 command-line options. This filtering is referred to as filtering by name.
The command-line options used to limit by name the layers being compared are:
• -sn layer_name1 ...
-select_names layer_name1 ...
In this example, the LVL utility runs on the POLY, NWELL, and METAL1 layers:
-sn POLY NWELL METAL1
In this example, the LVL utility runs on all the layers except METAL2:
-un METAL2
Comparing by Number
You can use command-line options to limit the layers being compared when no layer file is
provided. This filtering is referred to as filtering by number. Polygon, edge, and text layers
are filtered separately.
The command-line options used to limit by number the layers being compared are:
• -sel "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
-select_edge_layers "layer1[-layer1max] [,dtype1[-dtype1max]] [...]"
In the following example, the LVL utility runs on the following polygon layers:
• 1 through 4, all datatypes
• 5, datatypes 0 through 7
• 6, all datatypes
• 7 and 8, datatypes 0 through 7
-spl 1-4 5,0-7 6 7-8,0-7
In the next example, the LVL utility runs on all polygon layers and datatypes except for
datatypes 5, 6, and 7 in polygon layers 1, 2, and 3.
-upl 1-3,5-7
In this example, the LVL utility runs on the following edge layers:
• 1 through 5, datatypes 0 through 7
• 6 through 8, datatypes 5 through 7
-sel 1-5,0-7 6-8,5-7
In the next example, the LVL utility runs on all the edge layers and datatypes except edge
layer 2 and datatypes 5 and 6 in edge layer 4:
-uel 2 4,5-6
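The selection semantics above can be sketched in a few lines of Python. This is an illustration of the documented layer[-layermax][,dtype[-dtypemax]] syntax, not the utility's actual parser, and the function names are hypothetical:

```python
def parse_range(text):
    """Parse "1" or "1-4" into an inclusive (low, high) pair."""
    low, _, high = text.partition("-")
    return (int(low), int(high) if high else int(low))

def parse_spec(spec):
    """Parse a selection list such as "1-4 5,0-7 6 7-8,0-7".

    Each space-separated entry is layer[-layermax][,dtype[-dtypemax]];
    when the datatype part is absent, the entry matches all datatypes.
    """
    entries = []
    for entry in spec.split():
        layer_part, _, dtype_part = entry.partition(",")
        entries.append((parse_range(layer_part),
                        parse_range(dtype_part) if dtype_part else None))
    return entries

def selected(entries, layer, dtype):
    """Return True if (layer, dtype) matches any entry in the list."""
    for (l_low, l_high), dtypes in entries:
        if l_low <= layer <= l_high:
            if dtypes is None or dtypes[0] <= dtype <= dtypes[1]:
                return True
    return False

# The -spl example from this section: layers 1-4 (all datatypes),
# layer 5 (datatypes 0-7), layer 6 (all datatypes),
# layers 7-8 (datatypes 0-7).
spec = parse_spec("1-4 5,0-7 6 7-8,0-7")
```

Querying `selected(spec, layer, dtype)` for various pairs reproduces the selections described for the `-spl 1-4 5,0-7 6 7-8,0-7` example.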
Handling of Text
The -text command-line option instructs the LVL utility to compare text between two
layouts. If one or both databases have multiple text at the same coordinate, (x1,y1), the
utility matches as many text strings as possible at (x1,y1) between the two databases. The
errors are reported as follows:
• If there is only one unmatched text at (x1, y1) in each database:
Violations Found:TEXT_XOR
- - - - - - - - - - - - - - - - - - - - - - - - -
Structure (Position x, y) BaselineText CheckText
- - - - - - - - - - - - - - - - - - - - - - - - -
A (x1, y1) base_text check_text
• If there is one unmatched baseline text and no unmatched check text, or vice versa:
Violations Found:TEXT_XOR
- - - - - - - - - - - - - - - - - - - - - - - - -
Structure (Position x, y) BaselineText CheckText
- - - - - - - - - - - - - - - - - - - - - - - - -
A (x1, y1) base_text
• If there are multiple unmatched text at (x1,y1) in the baseline database and at least one
unmatched text at (x1,y1) in the check library:
Violations Found:TEXT_XOR
- - - - - - - - - - - - - - - - - - - - - - - - -
Structure (Position x, y) BaselineText CheckText
- - - - - - - - - - - - - - - - - - - - - - - - -
A (x1, y1) base_text1 check_text1
A (x1, y1) base_text1 check_text2
A (x1, y1) base_text2 check_text1
A (x1, y1) base_text2 check_text2
A (x1, y1) base_text3 check_text1
A (x1, y1) base_text3 check_text2
In the preceding example, there are three unmatched baseline text strings and two
unmatched check text strings. In general, all possible combinations of unmatched text
are reported.
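The reporting rule for unmatched text at one coordinate can be sketched as follows. The `text_xor_rows` helper is hypothetical, restating the documented behavior (every baseline/check combination, or single-sided rows when one side has no unmatched text):

```python
from itertools import product

def text_xor_rows(baseline_unmatched, check_unmatched):
    """Build TEXT_XOR report rows for unmatched text at one coordinate."""
    if baseline_unmatched and check_unmatched:
        # All combinations of unmatched baseline and check text.
        return list(product(baseline_unmatched, check_unmatched))
    # One side has no unmatched text left: report single-sided rows.
    return ([(b, "") for b in baseline_unmatched] +
            [("", c) for c in check_unmatched])
```

With three unmatched baseline strings and two unmatched check strings, this yields the six rows shown in the preceding example.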
LVL Output
The LVL utility errors are saved in the files described in Error Files.
The output library has the same format as the baseline library. The default output cell is
from the baseline library. VUE overlays differences to the baseline library.
• If the input design files are in either the GDSII or OASIS format and LVL runs with the
layer file, differences are reported on layers that are listed in the LVL_ERRORS file,
which might be different than the baseline layer or check layout.
For example, a difference between baseline layer {31,0} and check layer {131, 0},
which is mapped to the same layer using the LVL command-line options, can be
reported in output layer {1, 23}.
• If the input design files are in either the GDSII or OASIS format and LVL runs with
the layer mapping file and without a layer file, differences are reported on a layer
determined by the name generated based on the mapping file.
For example, if the layer name is L10D15, the difference is reported in layer {10, 15}.
• If the input design files are in either the GDSII or OASIS format and LVL runs without
the layer mapping file or layer file, differences between the two layers are reported on
the same layer of the output library.
For example, a difference between baseline layer {31, 0} and check layer {31, 0} is
reported in output layer {31, 0}.
If one of the input files (for example, OpenAccess or Milkyway) is not in the GDSII
or OASIS format, the differences are reported on the layers that are listed in the
LVL_ERRORS file, which might be different from the baseline layer or check layout.
For example, a difference between baseline layer {31,0} and check layer {131, 0}, which is
mapped to the same layer using the LVL command-line options, can be reported in output
layer {1, 23}.
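For the mapping-file case, a generated name such as L10D15 encodes the output layer and datatype. A minimal Python sketch of that decoding, assuming the L&lt;layer&gt;D&lt;datatype&gt; convention shown in the example above:

```python
import re

def layer_from_name(name):
    """Decode a name like "L10D15" into a (layer, datatype) pair.

    Returns None for names that do not follow the L<layer>D<datatype>
    convention assumed here.
    """
    match = re.fullmatch(r"L(\d+)D(\d+)", name)
    return (int(match.group(1)), int(match.group(2))) if match else None
```

For example, the name L10D15 decodes to layer {10, 15}, matching the reporting behavior described above.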
Exit Status
The exit status values of the icv_lvl executable are shown in Table 64.
Table 64 Exit Status Values
In this example, the icv_lvl executable ran successfully and found no differences:
% icv_lvl lib1 lib2 -c cell
% echo $?
0
Error Files
Table 65 describes the error files that are generated and stored in the current working
directory.
Note:
You can specify the maximum number of errors for each type of check using the
-elpc option. See Table 62.
The LVL utility also provides screen output that contains real-time results from each XOR
or NOT operation.
Table 65 Error Files
cell1_vs_cell2.LVL_RESULTS Results file that gives a brief description of each step of the run
and indicates whether there are any errors.
When there is an error, the results file points you to the specific
cell.RESULTS file that is located in the run_details directory
of the failed run. See the run_options() function in the IC
Validator Reference Manual for information about specifying the
cell.RESULTS file.
lvl_details/cell1_vs_cell2.lvl_sum Summary file containing the following information for each run
of the LVL utility:
• The icv_lvl executable command line.
• Indication of success or failure.
• Purpose of the run.
• Elapsed time and peak memory of the run.
cell1_vs_cell2.vue File with overlay differences to the baseline library. This file
allows you to view the errors using VUE.
Output Directories
Table 66 describes the directories that are created by the LVL utility. They are in the
current working directory.
Table 66 Output Directories
lvl_details/layer_list2/ Automatically generated layer list for the check library if a layer
list is not present. Summary information of layer list creation is
included for diagnostic purposes.
Use Models
This section shows two simple use models of the LVL utility.
• The following use model compares the baseline library with the check library using
two NOT operations. All of the layers in library1 are compared with all of the layers in
library2.
icv_lvl library1 library2 -c cell -not
• The following use model generates a layer file from the input library using cell as the
top cell. The output file contains an assign statement for each layer/datatype found in
the input library under the cell subtree.
icv_lvl library -af output_file -c cell
QuickLVL Utility
This section describes two use models of the Quick LVL utility, which allows you to
quickly read two layouts and compare the layout elements cell by cell.
• Cell-level Quick LVL: Standalone run of Quick LVL.
• Auto-Incremental DRC: The IC Validator tool calls Quick LVL first to determine
differences with respect to the baseline. Results are passed to the tool in an IC
Validator readable internal format without user intervention.
The Quick LVL utility supports input in the following layout formats:
• OASIS
• GDSII
Note:
The Quick LVL utility supports text with any text character, except the NULL
character.
Figure 114 shows the Quick LVL utility.
Placement Mismatch
Baseline shapes missing in Check ............... 20 violations found.
New shapes in Check not in Baseline ............ 150 violations found.
Figure 115 shows the Auto-Incremental DRC Flow.
C
Layout Integrity Management
This appendix describes the layout integrity management feature that is available with the
IC Validator tool.
new set of checksums can be merged into an existing LIDB so that any version of the cell
passes the layout integrity check.
There are three ways that cells in a design can be checked for layout integrity:
• Use the icv_lidb utility to validate an input layout against an existing LIDB without doing
a full IC Validator run. The utility matches layout cells to a set of checksums in the LIDB
by cell name.
• Use the layout_integrity_by_cell() function in an IC Validator runset to check
cells by name. A design cell is considered a match if its calculated checksums match
any set of checksums for a cell of the same name in any input LIDB specified in the
layout_integrity_options() function.
Basic Example
The following example shows basic layout integrity checking by cell name. First, create an
LIDB from the input layout.
Next, add the following to your runset and run the IC Validator tool as usual.
layout_integrity_options(
databases = {
{ db_name = "orig.lidb" }
}
);
{
@ "Layout Integrity Check";
layout_integrity_by_cell(
cells = {"*"}
);
}
For this run, no layout integrity mismatches are reported in the LAYOUT_ERRORS file
because you are running from the same input layout from which the LIDB is generated.
However, you can see that the layout integrity checking is performed by examining the
run_details/lidb.log file.
-------------------------------------------------------------------------
Layout Integrity Processing Log
11/08/2010 08:12:50
-------------------------------------------------------------------------
Also, the layout integrity status of each cell that was checked can be seen in the
run_details/hierarchy/cell.tree0 file.
CELL = AD4FUL
Cell Status = USED
GDSII Library = MAINLIB
Layout Integrity Check = Passed Completely
There are three possible values for the layout integrity check status in the cell.tree0 file.
• Passed Completely. The cell and all cells placed underneath it were checked and found
to be a match.
• Passed. The cell itself passed the layout integrity check, but at least one child cell was
not checked or failed the check.
• Failed. No layout integrity match was found for the cell.
When the checked layout does not match the LIDB, the mismatches are reported in the
LAYOUT_ERRORS file. For example:
runset1.rs:17:layout_integrity_by_cell:layer_mismatch
- - - - - - - - - - - - - - - - - - - - -
Structure Layer LIDB Golden Layout
- - - - - - - - - - - - - - - - - - - - -
AD4FUL (255;0) orig.lidb
VIA (255;0) orig.lidb
ADFULAH (255;0) orig.lidb
POLYHD (255;0) orig.lidb
INV (255;0) orig.lidb
INVH (255;0) orig.lidb
DGATE (255;0) orig.lidb
When the tool runs on an equivalent OASIS layout, layout integrity mismatches are
reported because cell placements are interpreted as AREFs.
...
Layout Integrity Check
layout_integrity_by_cell:placement_mismatch .... 6 violations found.
...
runset1.rs:17:layout_integrity_by_cell:placement_mismatch
- - - - - - - - - - - - - - - - - - - - - -
Structure Child Cell LIDB Golden Layout
- - - - - - - - - - - - - - - - - - - - - -
AD4FUL ADFULAH orig.lidb
AD4FUL VIA orig.lidb
ADFULAH INV orig.lidb
ADFULAH INVH orig.lidb
DGATE TGATE orig.lidb
DGATE VIA orig.lidb
Because these three layouts are equivalent, their checksums can be merged into a single
LIDB. A layout in any of the three formats then passes the check against the merged LIDB.
icv_lidb -i EX_ADDER_3 EX_ADDER_3.GDS EX_ADDER_3.oas -o merged.lidb
In the preceding example, there is no complete match found for the cell TOP. Layer 1
matches cell TOP in the layout 1.gds, but does not match 2.gds, so the mismatch is
reported against the golden layout 2.gds. Layer 2 matches cell TOP in 2.gds, but does not
match 1.gds, so the mismatch is reported against the golden layout 1.gds. Layer 3 does
not match any golden layout, so that column is left blank.
Cell placement mismatches for the same scenario are shown in the following example.
validate_layout.rs:15:layout_integrity_by_cell:placement_mismatch
- - - - - - - - - - - - - - - - - - - - - - -
Structure Child Cell LIDB Golden Layout
- - - - - - - - - - - - - - - - - - - - - - -
TOP A merged.lidb LIB2
TOP B merged.lidb LIB1
TOP C merged.lidb
In the preceding example, no complete match is found for the cell TOP. Placements of child
cell A match the placements in cell TOP of LIB1, but not LIB2, so the mismatch is reported
against LIB2. Placements of child cell B match the placements in cell TOP of LIB2 but not
LIB1, so the mismatch is reported against LIB1. Placements of child cell C do not match
any golden layout, so that column is left blank.
Hierarchical Mode
The IC Validator tool has two modes of operation for layout integrity checking:
• Cell level. Each cell is considered individually without regard to whether any children in
the hierarchy passed or failed the check. This is the default mode.
• Hierarchical. The status of child cells in the hierarchy is considered. Any layout
integrity mismatch of a child cell is also reported as a placement mismatch in the
parent cell. This reporting continues all the way up the hierarchy.
Mismatches are only reported in cells specified in the check function:
• layout_integrity_by_cell(). The cells are specified by the cells argument.
• layout_integrity_by_marker_layer(). The cells are selected by data on the marker layer.
In this example, there is a difference in a layer shape in cell A from the checksum in the
layout integrity database.
Note:
Although the example uses placed cells with matching cell names, this is not
required in hierarchical mode. Cells placed under the check cell must have the
same content but not the same cell name.
The following code checks all cells in hierarchical mode:
layout_integrity_by_cell(
cells = { "*" },
processing_mode = HIERARCHICAL
);
The difference is reported in A, C, and TOP. The output in the LAYOUT_ERRORS file is:
test.rs:15:layout_integrity_by_cell:layer_mismatch
- - - - - - - - - - - - - - - - - - - -
Structure Layer LIDB Golden Layout
- - - - - - - - - - - - - - - - - - - -
A (31;0) test.lidb
test.rs:15:layout_integrity_by_cell:placement_mismatch
- - - - - - - - - - - - - - - - - - - - - -
Structure Child Cell LIDB Golden Layout
- - - - - - - - - - - - - - - - - - - - - -
TOP A test.lidb
TOP C test.lidb
C A test.lidb
The following code checks cells C and TOP, but not cell A:
layout_integrity_by_cell(
cells = { "*", "!A" },
processing_mode = HIERARCHICAL
);
The difference is still reported in C and TOP. The hierarchical mode specifies that
everything below a particular cell being checked is also considered. The output is:
test.rs:15:layout_integrity_by_cell:placement_mismatch
- - - - - - - - - - - - - - - - - - - - - -
Structure Child Cell LIDB Golden Layout
- - - - - - - - - - - - - - - - - - - - - -
TOP A test.lidb
TOP C test.lidb
C A test.lidb
The run_details/lidb.log file contains additional details that indicate the status of each cell
in the hierarchy. In this example, cell A is processed, but there is no "Cell check" entry
because cell A was not specified in the layout_integrity_by_cell() function call.
-----------------------------------------------------------------------
Layout Integrity Processing Log
04/23/2014 06:13:36
-----------------------------------------------------------------------
Processing layout cell A
Database: test.lidb
All Layers: No hierarchically equivalent cells found
In this example, only cells M1-M6 have data on the marker layer. Cell B has a difference
on layer 3. Cell M5 has a difference on layer 4.
Note:
Although the example uses placed cells with matching cell names, this is not
required in hierarchical mode. Cells placed under the check cell must have the
same content but not the same cell name.
The following shows several hierarchical mode checks:
// This reports mismatches in M1, M2, M4, and M5
layout_integrity_by_marker_layer(
marker_layer = {1},
processing_mode = HIERARCHICAL
);
test2.rs:22:layout_integrity_by_marker_layer:cell_mismatch
- - - - - - - - - - -
Structure Marker Layer
- - - - - - - - - - -
M2 (1;0)
M5 (1;0)
test2.rs:28:layout_integrity_by_marker_layer:cell_mismatch
- - - - - - - - - - -
Structure Marker Layer
- - - - - - - - - - -
M1 (1;0)
M4 (1;0)
Details on the status of each cell are reported in the run_details/lidb.log file:
Processing layout cell C
Database: test.lidb
All Layers: Hierarchically equivalent cells found
Layers (3;0): Hierarchically equivalent cells found
Layers (4;0) (2;0): Hierarchically equivalent cells found
Layers (2;0): Hierarchically equivalent cells found
Marker Layer Check:
Result = No Data on Marker Layer
Database = test.lidb
This example checks cells with data on layer 8 as the marker layer. The following example
shows a matching cell and a mismatched cell in the run_details/lidb.log file.
-------------------------------------------------------------------------
Layout Integrity Processing Log
11/08/2010 09:45:33
-------------------------------------------------------------------------
You can see from the cell.LAYOUT_ERRORS file that mismatches for the marker layer
check are reported on a pass or fail basis. No individual layer or cell reference mismatches
are reported.
...
runset8.rs:17:layout_integrity_by_marker_layer:cell_mismatch
- - - - - - - - - - -
Structure Marker Layer
- - - - - - - - - - -
AD4FUL (8;0)
VIA (8;0)
TGATE (8;0)
If you want to check only specific layers in the marked cells, use the check_layers
argument.
{
@ "Layout Integrity Check";
layout_integrity_by_marker_layer(
marker_layer = { 8,0 },
check_layers = { {==6}, {==8}, {==10} }
);
}
The following example replaces cells in the Milkyway library EX_ADDER_3 with data from a
GDSII file and specifies a separate layout integrity database for the replacement library:
milkyway_merge_library_options(
libraries = {
{
library_name = "EX_ADDER_3",
replacement_libraries = {
{
file = "EX_ADDER_3.GDS",
format = GDSII,
layout_integrity = {
{ db_name = "gds.lidb" }
}
}
}
}
}
);
layout_integrity_options(
databases = {
{ db_name = "milkyway.lidb" }
}
);
{
@ "Layout Integrity Check";
layout_integrity_by_cell(
cells = {"*"}
);
}
After the IC Validator run, the run_details/lidb.log file shows that the Milkyway cell is
checked against the milkyway.lidb file, while the GDSII cells are checked against the
gds.lidb file.
-------------------------------------------------------------------------
Layout Integrity Processing Log
11/08/2010 09:56:43
-------------------------------------------------------------------------
Command-Line Options
The icv_lidb utility command-line syntax supports three simple use models.
• To generate an LIDB from an input layout:
icv_lidb -i input_layout -o output_lidb [-c top_cell]
Syntax Description
-c cell The top cell of the input layout. If omitted, all cells in the library are
processed.
-elpc num
-error_limit_per_check num
Sets the error limit for each type of check. The default is 100.
Note:
Setting this command-line option to ERROR_LIMIT_MAX causes all errors up to a
tool-defined maximum limit to be stored. The limit can be set above the tool-defined
maximum limit by giving a specific number. However, setting the limit above the
tool-defined maximum might result in decreased performance and increased disk
usage for large numbers of errors, and therefore is not recommended. Setting it to 0
suppresses storage of all errors.
-i input_layout ... input_layout
Input layout name when creating an LIDB or validating a layout against an existing
LIDB. If creating an LIDB, you can specify multiple files to incorporate checksums from
all layouts into the output LIDB.
The following examples show how to specify input layouts for various
formats:
• GDSII: icv_lidb -i /path/to/file.gds
• OASIS: icv_lidb -i /path/to/file.oas
• Milkyway: icv_lidb -i /library/path/LIBRARY_NAME
• OpenAccess: icv_lidb -i /path/to/lib.defs/LIBRARY_NAME
-lf filename Selects a subset of all layers and object types (polygons, edges, and
text) to be written to the layout integrity database.
For example, edges on layer1 are written to the layout integrity
database only if the layer file contains an assign_edge() function that
includes layer1. The layer file is stored in the layout integrity database
so that any validation runs against this database are restricted to the
layers and object types defined in the stored layer file.
The layer file must not contain options functions or runset functions.
If the file is empty or does not contain assign statements, this
command-line option is ignored and the IC Validator tool behaves as if
the option is not present.
-merge lidb ... lidb Merges the specified input LIDB into the output database.
-o output_lidb The output layout integrity database (LIDB) that is generated. If the file
exists, it is overwritten.
-oa_view view_name Optional view of the top cell for OpenAccess databases.
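As a sketch of the -lf option, a minimal layer file might contain only assign-style statements such as the following. The layer numbers and variable names are placeholders, and the exact argument style of the assign() and assign_edge() functions should follow the conventions used in your runsets:

```
// Hypothetical layer file for icv_lidb -lf:
// restrict the LIDB to polygons on layer 1 and edges on layer 2.
metal1       = assign( {{ 1 }} );
metal2_edges = assign_edge( {{ 2 }} );
```

Because the layer file is stored in the LIDB, later validation runs against this database are restricted to these layers and object types.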
Exit Status
The exit status values of the icv_lidb executable are shown in Table 68.
Table 68 icv_lidb Utility Exit Status Values
Output Files
The output files of the icv_lidb utility are shown in Table 69.
Table 69 icv_lidb Utility Output Files
lidb_details/icv_lidb.sum Summary file containing information about the run. All output
from the screen is captured in this file. It contains
• The command-line options used for the run.
• Information about input and output files.
• Runtime and memory used.
• Success or failure indication.
cell.LAYOUT_ERRORS Error file containing all layout integrity errors found during a
layout validation run. If no top cell is given, the file is named
cell.LAYOUT_ERRORS.
The icv_lidb_report utility reports information about an LIDB, including its creation
timestamp, and lists the cells with entries. Output is sent to the screen as well as to the
icv_lidb_report.sum summary file in the current directory.
Command-Line Options
The icv_lidb_report utility command-line syntax is:
icv_lidb_report -i input_lidb
Syntax Description
Output Files
The output files of the icv_lidb_report utility are shown in Table 71.
Table 71 icv_lidb_report Utility Output Files
icv_lidb_report.sum Summary file containing information about the run. All output
from the screen is captured in this file. It contains
• The command-line options used for the run.
• Information about input and output files.
D
IC Validator PXL Debugger
The IC Validator PXL Debugger assists you in constructing a runset or part of a runset
by allowing you to look at the intermediate stages in each of the design rules or device
configuration functions you are writing.
To start the PXL Debugger, specify the -debug command-line option on the icv
command:
% icv -debug runset.rs [-debug_source file]
The -debug_source command-line option allows you to run the PXL Debugger in batch
mode. Specify a file written with the record command in a previous debug run. The output
of the PXL Debugger displays on screen.
When runset execution starts,
• A message on screen indicates the start of the PXL Debugger.
• The run pauses at the first breakpoint. Five lines before and four lines after the
executable line are displayed.
• The PXL Debugger prompt (ICVD>>) displays.
For example,
Starting ICV Debugger
13 | t = t + dtoi((x * y**2) / z);
14 | }
15 | t = dtoi(t / x);
16 | note (" f2(3) = "+ t);
17 | }
--> 18 | xx:integer;
19 | f2(3);
20 | note("end. ");
ICVD>>
At the prompt enter the action you want the PXL Debugger to take. See the following PXL
Debugger Commands section.
Note:
The PXL Debugger is designed to execute PXL code sequentially. If distributed
processing options are enabled, a warning is generated and the options are
ignored.
The PXL Debugger steps through code in runset and user functions, but not in
remote functions.
s[tep]
Step into the function call. See the step command description.
n[ext]
Execute the function all at once. See the next command description.
q[uit]
Quit the PXL Debugger and the IC Validator run. See the quit command description.
c[ont]
Continue until the next breakpoint or the end of run. See the cont command description.
fin[ish]
Execute until the current function returns. See the finish command description.
b[reak]
b[reak] [file:]line
b[reak] function
Set a breakpoint. The default is at the current file:line. See the break command
description.
info b[reak] [number]
Show the breakpoint status. The default is all breakpoints. See the info break command
description.
dis[able] [number]
Disable the breakpoint. The default is to disable all breakpoints. See the disable
command description.
en[able] [number]
Enable the breakpoint. The default is to enable all breakpoints. See the enable command
description.
d[elete] number
Delete the specified breakpoint. See the delete command description.
l[ist]
l[ist] [file:]line
l[ist] function
List the source. The default is the current file and line. See the list command description.
where
Show the function call stack. See the where command description.
pwd
Prints the current working directory of the IC Validator PXL Debugger. For example:
ICVD>> pwd
Working directory /remote/data/ICV
p[rint] expression
Print the value of the expression. See the print command description.
record [file]
Save the commands from the current debug session. The default file is ICVpid.pdi. See
the record command description.
source file
Execute commands from the file. See the source command description.
break
The break command sets breakpoints. There are four ways to set breakpoints:
1. break
When specified without any options, break sets a breakpoint at the current line in the
current file.
4 |
5 | f2 : function (
6 | x : integer
7 | ) returning ans : integer
8 | {
--> 9 | ans = x;
10 | note("0. ans = " + ans);
11 | }
12 |
13 | xx: integer = 0;
ICVD>> b
Breakpoint #1, f2() at test.rs:9
ICVD>>
2. break line
Sets a breakpoint at a specific line number in the current file.
ICVD>> b 13
Breakpoint #1, f2() at test.rs:13
ICVD>>
3. break file:line
Sets a breakpoint at the specified line number in the specified file.
ICVD>> b test.rs:15
Breakpoint #2, _MAIN_() at test.rs:15
ICVD>>
4. break function
Sets the breakpoint in the first executable line within the specified function.
8 | {
9 | ans = x;
10 | note("0. ans = " + ans);
11 | }
12 |
--> 13 | xx: integer = 0;
14 | xx = f2(3);
15 | note("1. f2(3)= " + xx);
ICVD>> b f2
cont
The cont command executes the runset until the next breakpoint or to completion.
In this example, the runset runs to completion in the debugger.
1 | #include <math.rh>
2 | #include <diagnostics.rh>
3 |
--> 4 | f: list of integer = {1, 1};
5 |
6 | for (i in 2 thru 19) {
7 | f.push_back(f[i-2] + f[i-1]);
8 | }
ICVD>> c
In this example, execution stops at a breakpoint that is in a loop. (See the break
command description for information about the break command.)
ICVD>> b 13
Breakpoint #1, f2() at test.rs:13
ICVD>> c
8 | {
9 | t : integer = 0;
10 | y : integer = 10;
11 | z : integer = 100;
12 | for ( i=1 to x) {
B --> 13 | t = t + dtoi((x * y**2) / z);
14 | }
15 | t = dtoi(t / x);
16 | note (" f2(3) = "+ t);
17 | }
Breakpoint #1, f2() at test.rs:13
ICVD>> c
8 | {
9 | t : integer = 0;
10 | y : integer = 10;
11 | z : integer = 100;
12 | for ( i=1 to x) {
B --> 13 | t = t + dtoi((x * y**2) / z);
14 | }
15 | t = dtoi(t / x);
16 | note (" f2(3) = "+ t);
17 | }
Breakpoint #1, f2() at test.rs:13
ICVD>>
delete
The delete command deletes the specified breakpoint. The command format is
delete number
ICVD>> info b
Breakpoint #1, _MAIN_() at test.rs:7 Enabled
Breakpoint #2, _MAIN_() at test.rs:16 Enabled
ICVD>> d 1
ICVD>> info b
Breakpoint #2, _MAIN_() at test.rs:16 Enabled
ICVD>>
disable
The disable command disables the specified breakpoint. There are two ways to disable
breakpoints:
1. disable
Using the command without a breakpoint number disables all breakpoints.
ICVD>> info b
Breakpoint #1, _MAIN_() at test.rs:7 Enabled
Breakpoint #2, _MAIN_() at test.rs:18 Enabled
ICVD>> dis
ICVD>> info b
Breakpoint #1, _MAIN_() at test.rs:7 Disabled
Breakpoint #2, _MAIN_() at test.rs:18 Disabled
2. disable number
If a breakpoint number is specified, only that breakpoint is disabled.
ICVD>> dis 1
ICVD>> info b
Breakpoint #1, _MAIN_() at test.rs:13 Disabled
Breakpoint #2, _MAIN_() at test.rs:16 Enabled
ICVD>>
enable
The enable command enables a previously disabled breakpoint. There are two ways to
enable breakpoints:
1. enable
Using the command without a breakpoint number enables all breakpoints.
ICVD>> info b
Breakpoint #1, _MAIN_() at test.rs:7 Disabled
Breakpoint #2, _MAIN_() at test.rs:18 Disabled
ICVD>> en
ICVD>> info b
Breakpoint #1, _MAIN_() at test.rs:7 Enabled
Breakpoint #2, _MAIN_() at test.rs:18 Enabled
ICVD>>
2. enable number
If a breakpoint number is specified, only that breakpoint is enabled.
ICVD>> info b
Breakpoint #1, _MAIN_() at test.rs:7 Disabled
ICVD>> en 1
ICVD>> info b
Breakpoint #1, _MAIN_() at test.rs:7 Enabled
ICVD>>
finish
The finish command executes to the end of the current function. Execution still stops,
however, at breakpoints.
In this example, the run continues until the current function is completed.
8 | {
9 | t : integer = 0;
10 | y : integer = 10;
11 | z : integer = 100;
12 | for ( i=1 to x) {
--> 13 | t = t + dtoi((x * y**2) / z);
14 | }
15 | t = dtoi(t / x);
16 | note (" f2(3) = "+ t);
17 | }
ICVD>> fin
test.rs:16 f2(3) = 3
15 | t = dtoi(t / x);
16 | note (" f2(3) = "+ t);
17 | }
18 | xx:integer;
19 | f2(3);
--> 20 | note("end. ");
info break
The info break command lists information about the breakpoints set during the PXL
Debugger session. There are two ways to list the breakpoint information:
1. info break
When specified without a breakpoint number, all breakpoints are listed.
ICVD>> info b
Breakpoint #1, _MAIN_() at test.rs:7 Enabled
Breakpoint #2, _MAIN_() at test.rs:16 Enabled
list
The list command lists lines from the source code. There are four ways to list lines:
1. list
When specified without any option, the command lists 10 lines from the last printed
line.
1| #include <math.rh>
2| #include <diagnostics.rh>
3|
--> 4| f: list of integer = {1, 1};
5|
6| for (i in 2 thru 19) {
7| f.push_back(f[i-2] + f[i-1]);
8| }
ICVD>> list
9 |
10 | /* comment 1
11 | comment 2
12 | comment 3
13 | comment 4
14 | comment 5
15 | */
16 |
17 | for (i in 0 thru 19) {
18 | note("fibonacci(" + i + ") = " + f[i]);
ICVD>>
2. list line
Prints, from the current file, five lines before and four lines after the specified line
number.
3. list file:line
Prints, from the specified file, five lines before and four lines after the specified line
number.
ICVD>> list test.rs:4
1 | #include <math.rh>
2 | #include <diagnostics.rh>
3 |
--> 4 | f: list of integer = {1, 1};
5 |
6 | for (i in 2 thru 19) {
7 | f.push_back(f[i-2] + f[i-1]);
8 | }
ICVD>>
4. list function
Prints out five lines before and four lines after the first executable line within the
specified function.
ICVD>> list f2
4 |
5 | f2 : function (
6 | x : integer
7 | ) returning void
8 | {
9 | t : integer = 0;
10 | y : integer = 10;
11 | z : integer = 100;
12 | for ( i=1 to x) {
13 | t = t + dtoi((x * y**2) / z);
ICVD>>
next
The next command moves the PXL Debugger prompt to the next executable line in the
runset. A next on a function call completes function execution, and then moves to the next
executable line in the runset.
In this example, the next command is given at an executable line in the runset:
13 | t = t + dtoi((x * y**2) / z);
14 | }
15 | t = dtoi(t / x);
16 | note (" f2(3) = "+ t);
17 | }
--> 18 | xx:integer;
19 | f2(3);
20 | note("end. ");
ICVD>> n
14 | }
15 | t = dtoi(t / x);
16 | note (" f2(3) = "+ t);
17 | }
18 | xx:integer;
--> 19 | f2(3);
20 | note("end. ");
ICVD>>
print
The print command prints values of variables and simple expressions. The command
format is
print expression
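For example, while stopped inside the f2() function used in the earlier examples, you could inspect the accumulator variable:

```
ICVD>> p t
```

The debugger responds with the current value of the expression at that point in the run.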
quit
The quit command ends the debugging session and the IC Validator run.
1 | #include <math.rh>
2 | #include <diagnostics.rh>
3 |
--> 4 | f: list of integer = {1, 1};
5 |
6 | for (i in 2 thru 19) {
7 | f.push_back(f[i-2] + f[i-1]);
8 | }
ICVD>> q
IC Validator Run: Time= 0:00:04 User=0.06 System=0.01
Overall icv_engine time = 0:00:04
IC Validator is done.
record
The record command saves the commands from the current debugging session into an
ASCII file. There are two ways to save commands:
1. record
If a file name is not specified, the commands are saved in ICVpid.pdi, where pid is the
UNIX process ID.
ICVD>> record
Saving Commands to 'ICV22347.pdi'
ICVD>>
2. record file
You can specify a file name, with or without a path.
ICVD>> record myfile.cmd
Saving Commands to 'myfile.cmd'
ICVD>>
The ASCII file stores all commands, except break and list, verbatim from the PXL
Debugger. The break and list commands are stored with the resolved file path.
source
The source command executes the PXL Debugger commands in the specified file. After
the commands are completed, control of the PXL Debugger returns to the user, unless the
PXL Debugger exited because of a quit command in the source file. The
command format is
source file
For example,
ICVD>> source ICVDebug9398.pdi
ICVD>> n
2 | #include <diagnostics.rh>
3 |
4 | f: list of integer = {1, 1};
5 |
6 | for (i in 2 thru 19) {
--> 7 | f.push_back(f[i-2] + f[i-1]);
8 | }
9 |
10 | /* comment 1
11 | comment 2
ICVD>> b 7
Breakpoint #1, _MAIN_() at test.rs:7
ICVD>> b 16
Breakpoint #2, _MAIN_() at test.rs:16
ICVD>> c
2 | #include <diagnostics.rh>
3 |
4 | f: list of integer = {1, 1};
5 |
6 | for (i in 2 thru 19) {
B --> 7 | f.push_back(f[i-2] + f[i-1]);
8 | }
9 |
10 | /* comment 1
11 | comment 2
Breakpoint #1, _MAIN_() at test.rs:7
End of commands from commandfile
ICVD>>
step
The step command allows you to step into function calls, executing each line within
the function. The cursor moves to the first executable line in the function.
In this example, the PXL Debugger steps through f2, where the first executable line is 9.
14 | }
15 | t = dtoi(t / x);
16 | note (" f2(3) = "+ t);
17 | }
18 | xx:integer;
--> 19 | f2(3);
20 | note("end. ");
ICVD>> s
4 |
5 | f2 : function (
6 | x : integer
7 | ) returning void
8 | {
--> 9 | t : integer = 0;
10 | y : integer = 10;
11 | z : integer = 100;
12 | for ( i=1 to x) {
13 | t = t + dtoi((x * y**2) / z);
ICVD>>
where
The where command prints the function call stack of a statement in the runset. In this
example the command shows that there have been two function calls.
ICVD>> s
22 | c = a + b;
23 | };
24 |
25 | sum: function (x: integer, y: integer) returning z: integer
26 | {
--> 27 | z = add(x,y);
28 | note("z = " + z);
29 | }
30 |
31 | q = add(x,y);
ICVD>> where
#0 | sum() at test.rs:27
#1 | _MAIN_() at test.rs:32
In this example the command shows that there have been three function calls.
ICVD>> s
17 | note("x is now #2 = " + x);
18 | note("y is now #2 = " + y);
19 |
20 | add: function (a: integer, b: integer) returning c: integer
21 | {
--> 22 | c = a + b;
23 | };
24 |
25 | sum: function (x: integer, y: integer) returning z: integer
26 | {
ICVD>> where
#0 | add() at test.rs:22
#1 | sum() at test.rs:27
#2 | _MAIN_() at test.rs:32
ICVD>>
E
PYDB Perl API
This appendix describes how to access the IC Validator error database (PYDB) using a
Perl application programming interface (API) that is provided with the IC Validator tool.
The PYDB Perl API gives you direct access to the error data produced by the IC Validator
tool. For example, you can
• Generate error information for a specific violation.
• Count violations for tracking purposes.
The database and the Perl API are described in the following sections:
• PYDB Perl API Overview
• Accessing the PYDB
• Examples of Common Tasks
• PYDB Schema
The IC Validator tool stores its error results in an SQLite database; this error database
is the PYDB. The structure of the tables and how they are related to each other
is called the database schema. See “PYDB Schema” for information about the schema.
To properly query a complex database, you must know the database schema. You can use
SQLite commands to access the database and return information. These commands can
be complex depending on the information you want to be returned.
For convenient access to error data without needing to learn the database schema, the
provided PYDB Perl API connects to and retrieves data from a PYDB. It makes interfacing
with the error database much easier than using SQLite commands directly. It has built-in
functions for
• Connecting to the PYDB server.
• Submitting queries and ensuring that multiple database servers are not competing with
each other.
• Providing high-level functions to retrieve PYDB-specific information without needing to
create complex SQLite queries.
• Processing the results.
The PYDB Perl API is located in ICV_INSTALL_DIRECTORY/contrib/pydb/. The
subdirectories are shown in Table 73.
Table 73 Subdirectories of the pydb Directory
Subdirectory Description
bin/ Contains example scripts. These scripts show examples of how to use the
PYDB modules.
lib/ Contains public library modules. Each of these files has a short description
followed by detailed descriptions of all the class methods provided by the
module.
Include the following code in your Perl script before any database commands. This
code imports the needed modules from the IC Validator installation into the script.
push @INC, $ENV{"ICV_INSTALL_DIRECTORY"}."/contrib/pydb/lib";
require "PYDB.pm";
• Perform a direct SQLite query. The Perl API provides an interface similar to the DBI
module that comes with most Perl installations.
# Prepare the query
$sth = $dbh->prepare("SELECT CellID, CellName FROM pyCellNames");
if (! $sth) {
print "ERROR: $PYDB::errstr\n";
$dbh->Disconnect();
exit;
}
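To complete the query, a DBI-style continuation might look like the following sketch. The execute and fetch method names are assumptions based on the statement that the API is similar to the Perl DBI module; check the module files in contrib/pydb/lib for the exact PYDB.pm method names.

```perl
# Hypothetical continuation of the prepared query above.
# execute() and fetchrow_array() follow standard Perl DBI conventions
# and may be named differently in PYDB.pm.
$sth->execute();
while ( my ($cell_id, $cell_name) = $sth->fetchrow_array() ) {
    print "Cell $cell_id: $cell_name\n";
}
$dbh->Disconnect();   # Disconnect() appears in the error-handling branch above
```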
PYDB Schema
In the PYDB schema, errors are stored in error tables.
• Each table is made up of rows of data that can reference several related columns,
similar to a spreadsheet.
• Each table needs at least one unique key to identify individual records. Some tables
reference several keys.
There is at least one table per command type, with additional tables for commands with
child errors. Error classification information is also stored in the PYDB in several tables.
Table 74 shows some of the commonly used tables in the PYDB.
Table 74 PYDB Organization
pyCellNames Contains cell information. Each command can write errors from
different cells.
pyCommands Contains one record for each runset command that produced
errors. Multiple commands, for example, the internal1()
and external2() functions, can write to the same violation
comment.
pyErrors_<command type> Contains individual error data. The error tables for a particular
command type are created on the fly and contain data columns
specific to that type of command.
pyViolations Defines the violation rule comments for which errors are stored
in the PYDB.
F
Third-Party Licenses
This appendix provides the license information for third-party software that is used in the
IC Validator tool.
• Licensing Overview
• OSS Package Notices
◦ ANTLR
◦ Boost
◦ cx_Freeze
◦ Flatbuffers
◦ libdp
◦ libxml2
◦ MariaDB Connector
◦ MariaDB
◦ MCPP Public Domain Code
◦ NumPy*
◦ patchELF
◦ pyparsing*
◦ Python 3.6 License
◦ scikit-learn
◦ SciPy*
◦ Shroud-1.0
◦ SQLite
◦ TclLib
◦ Tcl/Tk
Note:
The tbcload code is part of Tcl.
◦ zlib
◦ Zstandard
• Standard OSS License Text
◦ Apache-2.0
◦ Artistic-1.0-Perl
◦ GPL-1.0
◦ GPL-3.0
Licensing Overview
This document includes licensing information relating to free and open-source software
("OSS") included with Synopsys®'s IC Validator products (the "SOFTWARE"). The terms
of the applicable OSS license(s) govern Synopsys®'s distribution and your use of the
SOFTWARE. Synopsys® and the third-party authors, licensors, and distributors of the
SOFTWARE disclaim all warranties and all liability arising from any and all use and
distribution of the SOFTWARE. To the extent the OSS is provided under an agreement
with Synopsys® that differs from the applicable OSS license(s), those terms are offered by
Synopsys® alone.
Synopsys® has reproduced below copyright and other licensing notices appearing within
the OSS packages. While Synopsys® seeks to provide complete and accurate copyright
and licensing information for each OSS package, Synopsys® does not represent or
warrant that the following information is complete, correct, or error-free. SOFTWARE
recipients are encouraged to (a) investigate the identified OSS packages to confirm
the accuracy of the licensing information provided herein and (b) notify Synopsys® of
any inaccuracies or errors found in this document so that Synopsys® may update this
document accordingly.
Certain OSS licenses (such as the GNU General Public Licenses, GNU Library/Lesser
General Public Licenses, Affero General Public Licenses, Mozilla Public Licenses,
Common Development and Distribution Licenses, Common Public License, and Eclipse
Public License) require that the source code corresponding to distributed OSS binaries be
made available to recipients or other requestors under the terms of the same OSS license.
Recipients or requestors who would like to receive a copy of such corresponding source
code should submit a request to Synopsys® by post at:
Synopsys
Attn: Open Source Requests
690 E. Middlefield Road
Mountain View, CA 94043
Please provide the following information in all submitted OSS requests:
• The OSS packages for which you are requesting source code;
• The Synopsys® product (and any available version information) with which the
requested OSS packages are distributed;
• An email address at which Synopsys® may contact you regarding the request (if
available); and
• The postal address for delivery of the requested source code.
An asterisk (*) has been added to OSS package names to indicate that those components
may be included in software or hardware products developed using ZeBu. The OSS
licensing information for those components is also reproduced in the IC Validator
redistributable OSS notices text file included with the SOFTWARE.
ANTLR
ANTLR 2 License
[The BSD License]
Copyright (c) 2012 Terence Parr and Sam Harwell
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are
permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of
conditions and the following disclaimer in the documentation and/or other materials
provided with the distribution.
• Neither the name of the author nor the names of its contributors may be used to
endorse or promote products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Boost
Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or organization obtaining
a copy of the software and accompanying documentation covered by this license (the
“Software”) to use, reproduce, display, distribute, execute, and transmit the Software,
and to prepare derivative works of the Software, and to permit third-parties to whom the
Software is furnished to do so, all subject to the following:
The copyright notices in the Software and this entire statement, including the above
license grant, this restriction and the following disclaimer, must be included in all copies
of the Software, in whole or in part, and all derivative works of the Software, unless such
copies or derivative works are solely in the form of machine-executable object code
generated by a source language processor.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT.
IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE
BE LIABLE FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
USE OR OTHER DEALINGS IN THE SOFTWARE.
cx_Freeze
Project homepage/download site: http://cx-freeze.readthedocs.io/; https://pypi.python.org/
pypi/cx_Freeze/4.3.4
Project licensing notices: http://cx-freeze.readthedocs.io/en/latest/license.html
Copyright © 2007-2017, Anthony Tuininga.
NOTE: this license is derived from the Python Software Foundation License
which can be found at http://www.python.org/psf/license
OR THAT THE USE OF CX_FREEZE WILL NOT INFRINGE ANY THIRD PARTY
RIGHTS.
5. THE COPYRIGHT HOLDERS SHALL NOT BE LIABLE TO LICENSEE OR
ANY OTHER USERS OF CX_FREEZE FOR ANY INCIDENTAL, SPECIAL,
OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING,
DISTRIBUTING, OR OTHERWISE USING CX_FREEZE, OR ANY DERIVATIVE
THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material breach of its terms
and conditions.
7. Nothing in this License Agreement shall be deemed to create any relationship of
agency, partnership, or joint venture between the copyright holders and Licensee. This
License Agreement does not grant permission to use copyright holder’s trademarks or
trade name in a trademark sense to endorse or promote products or services of Licensee,
or any third party.
8. By copying, installing or otherwise using cx_Freeze, Licensee agrees to be bound
by the terms and conditions of this License Agreement. Computronix® is a registered
trademark of Computronix (Canada) Ltd.
Flatbuffers
Project homepage/download site: http://google.github.io/flatbuffers/; https://github.com/
google/flatbuffers
Project licensing notices: https://github.com/google/flatbuffers/blob/master/LICENSE.txt
LICENSE.txt:
See Apache-2.0 in the Standard OSS License Text on page 491.
libdp
1. The "Software", below, refers to the Tcl-DP system, developed by the Tcl-DP group (in
either source-code, object-code or executable-code form), and related documentation,
and a "work based on the Software" means a work based on either the Software, on
part of the Software, or on any derivative work of the Software under copyright law:
that is, a work containing all or a portion of the Tcl-DP system, either verbatim or with
modifications. Each licensee is addressed as "you" or "Licensee."
2. Cornell University as the parent organization of the Tcl-DP group holds copyrights in
the Software. The copyright holder reserves all rights except those expressly granted to
licensees, and U.S. Government license rights.
libxml2
Libxml2 is the XML C parser and toolkit developed for the Gnome project (but usable
outside of the Gnome platform), it is free software available under the MIT License. The
MIT license:
The MIT License (MIT)
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to
whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
MariaDB Connector
Synopsys modifications: None
MariaDB Connector is licensed under the GNU Lesser General Public License,
version 2.1 (LGPL-2.1). Software licensed under the LGPL-2.1 is free software
that comes with ABSOLUTELY NO WARRANTY. You may modify, redistribute, or
otherwise use the software under the terms of the LGPL-2.1 (http://
www.gnu.org/licenses/old-licenses/lgpl-2.1.html).
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
[This is the first released version of the Lesser GPL. It also counts as
the successor of the GNU Library Public License, version 2, hence the
version number 2.1.]
2. You may modify your copy or copies of the Library or any portion of it, thus forming a
work based on the Library, and copy and distribute such modifications or work under
the terms of Section 1 above, provided that you also meet all of these conditions:
• The modified work must itself be a software library.
• You must cause the files modified to carry prominent notices stating that you
changed the files and the date of any change.
• You must cause the whole of the work to be licensed at no charge to all third parties
under the terms of this License.
• If a facility in the modified Library refers to a function or a table of data to be
supplied by an application program that uses the facility, other than as an argument
passed when the facility is invoked, then you must make a good faith effort to
ensure that, in the event an application does not supply such function or table, the
facility still operates, and performs whatever part of its purpose remains meaningful.
(For example, a function in a library to compute square roots has a purpose that is
entirely well-defined independent of the application. Therefore, Subsection 2d requires
that any application-supplied function or table used by this function must be optional:
if the application does not supply it, the square root function must still compute square
roots.)
These requirements apply to the modified work as a whole. If identifiable sections
of that work are not derived from the Library, and can be reasonably considered
independent and separate works in themselves, then this License, and its terms, do
not apply to those sections when you distribute them as separate works. But when
you distribute the same sections as part of a whole which is a work based on the
Library, the distribution of the whole must be on the terms of this License, whose
permissions for other licensees extend to the entire whole, and thus to each and every
part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest your rights to work
written entirely by you; rather, the intent is to exercise the right to control the distribution
of derivative or collective works based on the Library.
In addition, mere aggregation of another work not based on the Library with the Library
(or with a work based on the Library) on a volume of a storage or distribution medium
does not bring the other work under the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public License instead of
this License to a given copy of the Library. To do this, you must alter all the notices that
refer to this License, so that they refer to the ordinary GNU General Public License,
version 2, instead of to this License. (If a newer version than version 2 of the ordinary
GNU General Public License has appeared, then you can specify that version instead if
you wish.) Do not make any other change in these notices.
Once this change is made in a given copy, it is irreversible for that copy, so the ordinary
GNU General Public License applies to all subsequent copies and derivative works
made from that copy.
This option is useful when you wish to copy part of the code of the Library into a
program that is not a library.
4. You may copy and distribute the Library (or a portion or derivative of it, under Section
2) in object code or executable form under the terms of Sections 1 and 2 above
provided that you accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy from a designated
place, then offering equivalent access to copy the source code from the same place
satisfies the requirement to distribute the source code, even though third parties are
not compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the Library, but is designed to
work with the Library by being compiled or linked with it, is called a "work that uses the
Library". Such a work, in isolation, is not a derivative work of the Library, and therefore
falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library creates an executable
that is a derivative of the Library (because it contains portions of the Library), rather
than a "work that uses the library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file that is part of
the Library, the object code for the work may be a derivative work of the Library even
though the source code is not. Whether this is true is especially significant if the work
can be linked without the Library, or if the work is itself a library. The threshold for this
to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data structure layouts and
accessors, and small macros and small inline functions (ten lines or less in length),
then the use of the object file is unrestricted, regardless of whether it is legally a
derivative work. (Executables containing this object code plus portions of the Library
will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may distribute the object code
for the work under the terms of Section 6. Any executables containing that work also
fall under Section 6, whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also combine or link a "work that
uses the Library" with the Library to produce a work containing portions of the Library,
and distribute that work under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse engineering for
debugging such modifications.
You must give prominent notice with each copy of the work that the Library is used in it
and that the Library and its use are covered by this License. You must supply a copy of
this License. If the work during execution displays copyright notices, you must include
the copyright notice for the Library among them, as well as a reference directing the
user to the copy of this License. Also, you must do one of these things:
• Accompany the work with the complete corresponding machine-readable source
code for the Library including whatever changes were used in the work (which must
be distributed under Sections 1 and 2 above); and, if the work is an executable
linked with the Library, with the complete machine-readable "work that uses the
Library", as object code and/or source code, so that the user can modify the Library
and then relink to produce a modified executable containing the modified Library.
(It is understood that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application to use the modified
definitions.)
• Use a suitable shared library mechanism for linking with the Library. A suitable
mechanism is one that (1) uses at run time a copy of the library already present
on the user's computer system, rather than copying library functions into the
executable, and (2) will operate properly with a modified version of the library, if the
user installs one, as long as the modified version is interface-compatible with the
version that the work was made with.
• Accompany the work with a written offer, valid for at least three years, to give the
same user the materials specified in Subsection 6a, above, for a charge no more
than the cost of performing this distribution.
• If distribution of the work is made by offering access to copy from a designated
place, offer equivalent access to copy the above specified materials from the same
place.
• Verify that the user has already received a copy of these materials or that you have
already sent this user a copy.
For an executable, the required form of the "work that uses the Library" must include
any data and utility programs needed for reproducing the executable from it. However,
as a special exception, the materials to be distributed need not include anything that
is normally distributed (in either source or binary form) with the major components
(compiler, kernel, and so on) of the operating system on which the executable runs,
unless that component itself accompanies the executable.
It may happen that this requirement contradicts the license restrictions of other
proprietary libraries that do not normally accompany the operating system. Such
a contradiction means you cannot use both them and the Library together in an
executable that you distribute.
7. You may place library facilities that are a work based on the Library side-by-side in
a single library together with other library facilities not covered by this License, and
distribute such a combined library, provided that the separate distribution of the work
based on the Library and of the other library facilities is otherwise permitted, and
provided that you do these two things:
• Accompany the combined library with a copy of the same work based on the
Library, uncombined with any other library facilities. This must be distributed under
the terms of the Sections above.
• Give prominent notice with the combined library of the fact that part of it is a work
based on the Library, and explaining where to find the accompanying uncombined
form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute the Library except
as expressly provided under this License. Any attempt otherwise to copy, modify,
sublicense, link with, or distribute the Library is void, and will automatically terminate
your rights under this License. However, parties who have received copies, or rights,
from you under this License will not have their licenses terminated so long as such
parties remain in full compliance.
9. You are not required to accept this License, since you have not signed it. However,
nothing else grants you permission to modify or distribute the Library or its derivative
works. These actions are prohibited by law if you do not accept this License. Therefore,
by modifying or distributing the Library (or any work based on the Library), you indicate
your acceptance of this License to do so, and all its terms and conditions for copying,
distributing or modifying the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the Library), the recipient
automatically receives a license from the original licensor to copy, distribute, link with
or modify the Library subject to these terms and conditions. You may not impose any
further restrictions on the recipients' exercise of the rights granted herein. You are not
responsible for enforcing compliance by third parties with this License.
11. If, as a consequence of a court judgment or allegation of patent infringement or for any
other reason (not limited to patent issues), conditions are imposed on you (whether by
court order, agreement or otherwise) that contradict the conditions of this License, they
do not excuse you from the conditions of this License. If you cannot distribute so as
to satisfy simultaneously your obligations under this License and any other pertinent
obligations, then as a consequence you may not distribute the Library at all. For
example, if a patent license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then the only way you
could satisfy both it and this License would be to refrain entirely from distribution of the
Library.
If any portion of this section is held invalid or unenforceable under any particular
circumstance, the balance of the section is intended to apply, and the section as a
whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any patents or other
property right claims or to contest validity of any such claims; this section has the
sole purpose of protecting the integrity of the free software distribution system which
is implemented by public license practices. Many people have made generous
contributions to the wide range of software distributed through that system in reliance
on consistent application of that system; it is up to the author/donor to decide if he or
she is willing to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to be a consequence
of the rest of this License.
12. If the distribution and/or use of the Library is restricted in certain countries either by
patents or by copyrighted interfaces, the original copyright holder who places the
Library under this License may add an explicit geographical distribution limitation
excluding those countries, so that distribution is permitted only in or among countries
not thus excluded. In such case, this License incorporates the limitation as if written in
the body of this License.
13. The Free Software Foundation may publish revised and/or new versions of the Lesser
General Public License from time to time. Such new versions will be similar in spirit to
the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library specifies a version
number of this License which applies to it and "any later version", you have the option
of following the terms and conditions either of that version or of any later version
published by the Free Software Foundation. If the Library does not specify a license
version number, you may choose any version ever published by the Free Software
Foundation.
14. If you wish to incorporate parts of the Library into other free programs whose
distribution conditions are incompatible with these, write to the author to ask for
permission. For software which is copyrighted by the Free Software Foundation, write
to the Free Software Foundation; we sometimes make exceptions for this. Our decision
will be guided by the two goals of preserving the free status of all derivatives of our free
software and of promoting the sharing and reuse of software generally.
NO WARRANTY
• BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE
LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT
MariaDB
Synopsys modifications: None
MariaDB is licensed under the GNU General Public License, version 2, (GPL-2.0).
Software licensed under the GPL-2.0 is free software that comes with ABSOLUTELY NO
WARRANTY. You may modify, redistribute, or otherwise use the GPL software under the
terms of the GPL-2.0 (http://www.gnu.org/licenses/old-licenses/gpl-2.0.html).
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Activities other than copying, distribution and modification are not covered by this License;
they are outside its scope. The act of running the Program is not restricted, and the output
from the Program is covered only if its contents constitute a work based on the Program
(independent of having been made by running the Program). Whether that is true depends
on what the Program does.
1. You may copy and distribute verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and appropriately publish
on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all
the notices that refer to this License and to the absence of any warranty; and give any
other recipients of the Program a copy of this License along with the Program.
You may charge a fee for the physical act of transferring a copy, and you may at your
option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion of it, thus forming a
work based on the Program, and copy and distribute such modifications or work under
the terms of Section 1 above, provided that you also meet all of these conditions:
• You must cause the modified files to carry prominent notices stating that you
changed the files and the date of any change.
• You must cause any work that you distribute or publish, that in whole or in part
contains or is derived from the Program or any part thereof, to be licensed as a
whole at no charge to all third parties under the terms of this License.
• If the modified program normally reads commands interactively when run, you must
cause it, when started running for such interactive use in the most ordinary way, to
print or display an announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide a warranty) and
that users may redistribute the program under these conditions, and telling the user
how to view a copy of this License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on the Program is
not required to print an announcement.)
These requirements apply to the modified work as a whole. If identifiable sections
of that work are not derived from the Program, and can be reasonably considered
independent and separate works in themselves, then this License, and its terms, do
not apply to those sections when you distribute them as separate works. But when
you distribute the same sections as part of a whole which is a work based on the
Program, the distribution of the whole must be on the terms of this License, whose
permissions for other licensees extend to the entire whole, and thus to each and
every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest your rights to
work written entirely by you; rather, the intent is to exercise the right to control the
distribution of derivative or collective works based on the Program.
In addition, mere aggregation of another work not based on the Program with
the Program (or with a work based on the Program) on a volume of a storage or
distribution medium does not bring the other work under the scope of this License.
3. You may copy and distribute the Program (or a work based on it, under Section 2) in
object code or executable form under the terms of Sections 1 and 2 above provided
that you also do one of the following:
Accompany it with the complete corresponding machine-readable source code,
which must be distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
Accompany it with a written offer, valid for at least three years, to give any third party,
for a charge no more than your cost of physically performing source distribution, a
complete machine-readable copy of the corresponding source code, to be distributed
under the terms of Sections 1 and 2 above on a medium customarily used for software
interchange; or,
Accompany it with the information you received as to the offer to distribute
corresponding source code. (This alternative is allowed only for noncommercial
distribution and only if you received the program in object code or executable form with
such an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for making
modifications to it. For an executable work, complete source code means all the source
code for all modules it contains, plus any associated interface definition files, plus
the scripts used to control compilation and installation of the executable. However,
as a special exception, the source code distributed need not include anything that
is normally distributed (in either source or binary form) with the major components
(compiler, kernel, and so on) of the operating system on which the executable runs,
unless that component itself accompanies the executable.
If distribution of executable or object code is made by offering access to copy from a
designated place, then offering equivalent access to copy the source code from the
same place counts as distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program except as expressly
provided under this License. Any attempt otherwise to copy, modify, sublicense or
distribute the Program is void, and will automatically terminate your rights under this
License. However, parties who have received copies, or rights, from you under this
License will not have their licenses terminated so long as such parties remain in full
compliance.
5. You are not required to accept this License, since you have not signed it. However,
nothing else grants you permission to modify or distribute the Program or its derivative
works. These actions are prohibited by law if you do not accept this License. Therefore,
by modifying or distributing the Program (or any work based on the Program), you
indicate your acceptance of this License to do so, and all its terms and conditions for
copying, distributing or modifying the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the Program), the
recipient automatically receives a license from the original licensor to copy, distribute
or modify the Program subject to these terms and conditions. You may not impose any
further restrictions on the recipients' exercise of the rights granted herein. You are not
responsible for enforcing compliance by third parties to this License.
7. If, as a consequence of a court judgment or allegation of patent infringement or for any
other reason (not limited to patent issues), conditions are imposed on you (whether by
court order, agreement or otherwise) that contradict the conditions of this License, they
do not excuse you from the conditions of this License. If you cannot distribute so as
to satisfy simultaneously your obligations under this License and any other pertinent
obligations, then as a consequence you may not distribute the Program at all. For
example, if a patent license would not permit royalty-free redistribution of the Program
by all those who receive copies directly or indirectly through you, then the only way you
could satisfy both it and this License would be to refrain entirely from distribution of the
Program.
If any portion of this section is held invalid or unenforceable under any particular
circumstance, the balance of the section is intended to apply and the section as a
whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any patents or other
property right claims or to contest validity of any such claims; this section has the
sole purpose of protecting the integrity of the free software distribution system,
which is implemented by public license practices. Many people have made generous
contributions to the wide range of software distributed through that system in reliance
on consistent application of that system; it is up to the author/donor to decide if he or
she is willing to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to be a consequence
of the rest of this License.
8. If the distribution and/or use of the Program is restricted in certain countries either
by patents or by copyrighted interfaces, the original copyright holder who places the
Program under this License may add an explicit geographical distribution limitation
excluding those countries, so that distribution is permitted only in or among countries
not thus excluded. In such case, this License incorporates the limitation as if written in
the body of this License.
9. The Free Software Foundation may publish revised and/or new versions of the General
Public License from time to time. Such new versions will be similar in spirit to the
present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies a
version number of this License which applies to it and "any later version", you have
the option of following the terms and conditions either of that version or of any later
version published by the Free Software Foundation. If the Program does not specify
a version number of this License, you may choose any version ever published by the
Free Software Foundation.
10. If you wish to incorporate parts of the Program into other free programs whose
distribution conditions are different, write to the author to ask for permission. For
software which is copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our decision will be
guided by the two goals of preserving the free status of all derivatives of our free
software and of promoting the sharing and reuse of software generally.
NO WARRANTY
• BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE
LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT
WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT
NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY
AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM
PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
• IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY
MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE,
BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL,
INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR
INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS
OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED
BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE
WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS
BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
NumPy*
Project homepage/download site: http://www.numpy.org/; https://github.com/numpy/numpy
Project licensing notices: https://numpy.org/doc/stable/license.html
LICENSE.txt
* Neither the name of the NumPy Developers nor the names of any
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
-------------------------------------------------------------------------
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-------------------------------------------------------------------------
numpy/linalg/lapack_lite/LICENSE.txt:
- Neither the name of the copyright holders nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
tools/npy_tempita/license.txt:
License
-------
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
numpy/core/src/multiarray/dragon4.c:
/*
* Copyright (c) 2014 Ryan Juckett
* http://www.ryanjuckett.com/
*
* This software is provided 'as-is', without any express or implied
* warranty. In no event will the authors be held liable for any damages
* arising from the use of this software.
*
* Permission is granted to anyone to use this software for any purpose,
* including commercial applications, and to alter it and redistribute it
* freely, subject to the following restrictions:
*
patchELF
Project homepage/download site: https://nixos.org/patchelf.html
Project licensing notices:
COPYING:
pyparsing*
Project homepage/download site: https://pypi.python.org/pypi/pyparsing
Project licensing notices:
# module pyparsing.py
#
# Copyright (c) 2003-2016 Paul T. McGuire
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
--------------------------------------------
scikit-learn
Project homepage/download site: http://scikit-learn.org/
Project licensing notices:
New BSD License
c. Neither the name of the Scikit-learn Developers nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
SciPy*
Project homepage/download site: https://www.SciPy.org; https://github.com/SciPy/SciPy
Project licensing notices: https://github.com/SciPy/SciPy/blob/master/LICENSE.txt
Copyright (c) 2001, 2002 Enthought, Inc. All rights reserved.
Copyright (c) 2003-2017 SciPy Developers. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
a. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
b. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
c. Neither the name of Enthought nor the names of the SciPy Developers
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
SciPy bundles a number of libraries that are compatibly licensed. We
list these here.
Name: Numpydoc
Files: doc/sphinxext/numpydoc/*
License: 2-clause BSD
For details, see doc/sphinxext/LICENSE.txt
Name: scipy-sphinx-theme
Files: doc/scipy-sphinx-theme/*
License: 3-clause BSD, PSF and Apache 2.0
For details, see doc/sphinxext/LICENSE.txt
Name: Six
Files: scipy/_lib/six.py
License: MIT
For details, see the header inside scipy/_lib/six.py
Name: Decorator
Files: scipy/_lib/decorator.py
License: 2-clause BSD
For details, see the header inside scipy/_lib/decorator.py
Name: ID
Files: scipy/linalg/src/id_dist/*
License: 3-clause BSD
For details, see scipy/linalg/src/id_dist/doc/doc.tex
Name: L-BFGS-B
Files: scipy/optimize/lbfgsb/*
License: BSD license
For details, see scipy/optimize/lbfgsb/README
Name: SuperLU
Files: scipy/sparse/linalg/dsolve/SuperLU/*
License: 3-clause BSD
For details, see scipy/sparse/linalg/dsolve/SuperLU/License.txt
Name: ARPACK
Files: scipy/sparse/linalg/eigen/arpack/ARPACK/*
License: 3-clause BSD
For details, see scipy/sparse/linalg/eigen/arpack/ARPACK/COPYING
Name: Qhull
Files: scipy/spatial/qhull/*
License: Qhull license (BSD-like)
For details, see scipy/spatial/qhull/COPYING.txt
Name: Cephes
Files: scipy/special/cephes/*
License: 3-clause BSD
Distributed under 3-clause BSD license with permission from the author,
see https://lists.debian.org/debian-legal/2004/12/msg00295.html
Cephes Math Library Release 2.8: June, 2000
Copyright 1984, 1995, 2000 by Stephen L. Moshier
This software is derived from the Cephes Math Library and is
incorporated herein by permission of the author.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the <organization> nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER>
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
Name: Faddeeva
Files: scipy/special/Faddeeva.*
License: MIT
Copyright (c) 2012 Massachusetts Institute of Technology
#
# Copyright (c) 2005-2015, Michele Simionato
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
# Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# Redistributions in bytecode form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
# DAMAGE.
scipy/linalg/src/id_dist/doc/doc.tex
The present document and all of the software
in the accompanying distribution (which is contained in the directory
{\tt id\_dist} and its subdirectories, or in the file
{\tt id\_dist.tar.gz})\, is
\bigskip
Copyright \copyright\ 2014 by P.-G. Martinsson, V. Rokhlin,
Y. Shkolnisky, and M. Tygert.
\bigskip
All rights reserved.
\bigskip
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
\begin{enumerate}
\item Redistributions of source code must retain the above copyright
notice, this list of conditions, and the following disclaimer.
\item Redistributions in binary form must reproduce the above copyright
notice, this list of conditions, and the following disclaimer in the
documentation and/or other materials provided with the distribution.
\item None of the names of the copyright holders may be used to endorse
or promote products derived from this software without specific prior
written permission.
\end{enumerate}
\bigskip
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNERS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
scipy/optimize/lbfgsb/README
License of L-BFGS-B (Fortran code)
==================================
The version included here (in lbfgsb.f) is 3.0 (released April 25,
2011).
It was written by Ciyou Zhu, Richard Byrd, and Jorge Nocedal
<[email protected]>. It carries the following condition for use:
"""
This software is freely available, but we expect that all publications
describing work using this software, or all commercial products using it,
quote at least one of the references given below. This software is
released under the BSD License.
References
* R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound
Constrained Optimization, (1995), SIAM Journal on Scientific and
Statistical Computing, 16, 5, pp. 1190-1208.
* C. Zhu, R. H. Byrd and J. Nocedal. L-BFGS-B: Algorithm 778: L-BFGS-B,
FORTRAN routines for large scale bound constrained optimization (1997),
ACM Transactions on Mathematical Software, 23, 4, pp. 550 - 560.
* J.L. Morales and J. Nocedal. L-BFGS-B: Remark on Algorithm 778:
L-BFGS-B, FORTRAN routines for large scale bound constrained optimization
(2011), ACM Transactions on Mathematical Software, 38, 1.
"""
License for the Python wrapper
==============================
Copyright (c) 2004 David M. Cooke <[email protected]>
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to permit
persons to whom the Software is furnished to do so, subject to the
following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
USE OR OTHER DEALINGS IN THE SOFTWARE.
scipy/sparse/linalg/dsolve/SuperLU/License.txt
Copyright (c) 2003, The Regents of the University of California, through
Lawrence Berkeley National Laboratory (subject to receipt of any required
approvals from U.S. Dept. of Energy)
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
(1) Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
(2) Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
(3) Neither the name of Lawrence Berkeley National Laboratory, U.S. Dept.
of Energy nor the names of its contributors may be used to endorse
or promote products derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
scipy/sparse/linalg/eigen/arpack/ARPACK/COPYING
BSD Software License
Pertains to ARPACK and P_ARPACK
Copyright (c) 1996-2008 Rice University.
Developed by D.C. Sorensen, R.B. Lehoucq, C. Yang, and K. Maschhoff.
All rights reserved.
Arpack has been renamed to arpack-ng.
Copyright (c) 2001-2011 - Scilab Enterprises
Updated by Allan Cornet, Sylvestre Ledru.
Copyright (c) 2010 - Jordi Gutiérrez Hermoso (Octave patch)
Copyright (c) 2007 - Sébastien Fabbro (gentoo patch)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
- Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer listed in
this license in the documentation and/or other materials provided with
the distribution.
Shroud-1.0
Project homepage: http://www.cpan.org/authors/id/C/CR/CRAIC/shroud-1.0
Project license:
#!/usr/bin/perl
# shroud
# Copyright 2000 Robert Jones, Craic Computing, All rights reserved.
# This program is free software; you can redistribute it and/or modify it
# under the same terms as Perl itself.
# The software is supplied as is, with absolutely no warranty.
Perl5 is Copyright (C) 1993-2005, by Larry Wall and others.
For those of you that choose to use the GNU General Public License, my
interpretation of the GNU General Public License is that no Perl script
falls under the terms of the GPL unless you explicitly put said script
under the terms of the GPL yourself.
Furthermore, any object code linked with perl does not automatically
fall under the terms of the GPL, provided such object code only adds
definitions of subroutines and variables, and does not otherwise impair
the resulting interpreter from executing any standard Perl script.
I consider linking in C subroutines in this manner to be the moral
equivalent of defining subroutines in the Perl language itself. You may
sell such an object file as proprietary provided that you provide or
offer to provide the Perl source, as specified by the GNU General Public
License. (This is merely an alternate way of specifying input to the
program.) You may also sell a binary produced by the dumping of a running
Perl script that belongs to you, provided that you provide or offer to
provide the Perl source as specified by the GPL. (The fact that a Perl
interpreter and your code are in the same binary file is, in this case, a
form of mere aggregation.)
-- Larry Wall
See Artistic-1.0-Perl in the Standard OSS License Text on page 491. See
GPL-1.0 in the Standard OSS License Text on page 491.
SQLite
Project homepage/download site: https://www.sqlite.org/index.html
Project licensing notices: https://www.sqlite.org/copyright.html
SQLite Is Public Domain
All of the code and documentation in SQLite has been dedicated to the
public domain by the authors. All code authors, and representatives
of the companies they work for, have signed affidavits dedicating
their contributions to the public domain and originals of those signed
affidavits are stored in a firesafe at the main offices of Hwaci. Anyone
is free to copy, modify, publish, use, compile, sell, or distribute the
original SQLite code, either in source code form or as a compiled binary,
for any purpose, commercial or non-commercial, and by any means.
All of the deliverable code in SQLite has been written from scratch. No
code has been taken from other projects or from the open internet. Every
line of code can be traced back to its original author, and all of those
authors have public domain dedications on file. So the SQLite code base
is clean and is uncontaminated with licensed code from other projects.
Warranty of Title
SQLite is in the public domain and does not require a license. Even
so, some organizations want legal proof of their right to use SQLite.
Circumstances where this occurs include the following:
You are using SQLite in a jurisdiction that does not recognize the public
domain.
You are using SQLite in a jurisdiction that does not recognize the right
of an author to dedicate their work to the public domain.
You want to hold a tangible legal document as evidence that you have the
legal right to use and distribute SQLite.
Your legal department tells you that you have to purchase a license.
If any of the above circumstances apply to you, Hwaci, the company that
employs all the developers of SQLite, will sell you a Warranty of Title
for SQLite. A Warranty of Title is a legal document that asserts that
the claimed authors of SQLite are the true authors, and that the authors
have the legal right to dedicate the SQLite to the public domain, and
that Hwaci will vigorously defend against challenges to those claims.
All proceeds from the sale of SQLite Warranties of Title are used to fund
continuing improvement and support of SQLite.
Contributed Code
TclLib
This software is copyrighted by Ajuba Solutions and other parties. The following terms
apply to all files associated with the software unless explicitly disclaimed in individual
files. The authors hereby grant permission to use, copy, modify, distribute, and license this
software and its documentation for any purpose, provided that existing copyright notices
are retained in all copies and that this notice is included verbatim in any distributions.
No written agreement, license, or royalty fee is required for any of the authorized uses.
Modifications to this software may be copyrighted by their authors and need not follow the
licensing terms described here, provided that the new terms are clearly indicated on the
first page of each file where they apply.
IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY
PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
Tcl/Tk
The following terms apply to all versions of the core Tcl/Tk releases, the Tcl/Tk browser
plug-in version 2.0, Tcllib, and TclBlend and Jacl version 1.0. Please note that the TclPro
tools are under a different license agreement. This agreement is part of the standard Tcl/
Tk distribution as the file named "license.terms".
Tcl/Tk License Terms
This software is copyrighted by the Regents of the University of California, Sun
Microsystems, Inc., Scriptics Corporation, Ajuba Solutions, and other parties. The
following terms apply to all files associated with the software unless explicitly disclaimed in
individual files.
The authors hereby grant permission to use, copy, modify, distribute, and license this
software and its documentation for any purpose, provided that existing copyright notices
are retained in all copies and that this notice is included verbatim in any distributions.
No written agreement, license, or royalty fee is required for any of the authorized uses.
Modifications to this software may be copyrighted by their authors and need not follow the
licensing terms described here, provided that the new terms are clearly indicated on the
first page of each file where they apply.
IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES
ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY
DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
zlib
License
/* zlib.h -- interface of the 'zlib' general purpose compression library
version 1.2.8, April 28th, 2013
Copyright (C) 1995-2013 Jean-loup Gailly and Mark Adler
This software is provided 'as-is', without any express or implied warranty. In no event will
the authors be held liable for any damages arising from the use of this software.
Permission is granted to anyone to use this software for any purpose, including
commercial applications, and to alter it and redistribute it freely, subject to the following
restrictions:
1. The origin of this software must not be misrepresented; you must not claim that you
wrote the original software. If you use this software in a product, an acknowledgement
in the product documentation would be appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Jean-loup Gailly ([email protected]) Mark Adler ([email protected])
*/
Zstandard
Project homepage: https://github.com/facebook/zstd
Project license: https://github.com/facebook/zstd/blob/master/LICENSE
BSD License
* Neither the name Facebook nor the names of its contributors may be
used to endorse or promote products derived from this software without
specific prior written permission.
Apache-2.0
Project homepage/download site: http://google.github.io/flatbuffers/;
https://github.com/google/flatbuffers
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Legal Entity" shall mean the union of the acting entity and all other
entities that control, are controlled by, or are under common control
with that entity. For the purposes of this definition, "control" means
(i) the power, direct or indirect, to cause the direction or management
of such entity, whether by contract or otherwise, or (ii) ownership
of fifty percent (50%) or more of the outstanding shares, or (iii)
beneficial ownership of such entity.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation source,
and configuration files.
"Derivative Works" shall mean any work, whether in Source or Object form,
that is based on (or derived from) the Work and for which the editorial
revisions, annotations, elaborations, or other modifications represent,
as a whole, an original work of authorship. For the purposes of this
License, Derivative Works shall not include works that remain separable
from, or merely link (or bind by name) to the interfaces of, the Work and
Derivative Works thereof.
and in Source or Object form, provided that You meet the following
conditions:
(a) You must give any other recipients of the Work or Derivative Works a
copy of this License; and
(b) You must cause any modified files to carry prominent notices stating
that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You
distribute, all copyright, patent, trademark, and attribution notices
from the Source form of the Work, excluding those notices that do not
pertain to any part of the Derivative Works; and
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions for
use, reproduction, or distribution of Your modifications, or for any
such Derivative Works as a whole, provided Your use, reproduction, and
distribution of the Work otherwise complies with the conditions stated in
this License.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain a
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Artistic-1.0-Perl
The "Artistic License"
Preamble
Definitions:
"Freely Available" means that no fee is charged for the item itself,
though there may be fees involved in handling the item. It also means
that recipients of the item may redistribute it under the same conditions
they received it.
• You may make and give away verbatim copies of the source form of the
Standard Version of this Package without restriction, provided that
you duplicate all of the original copyright notices and associated
disclaimers.
• You may apply bug fixes, portability fixes and other modifications
derived from the Public Domain or from the Copyright Holder. A Package
modified in such a way shall still be considered the Standard Version.
• You may otherwise modify your copy of this Package in any way,
provided that you insert a prominent notice in each changed file
stating how and when you changed that file, and provided that you do
at least ONE of the following:
• The name of the Copyright Holder may not be used to endorse or promote
products derived from this software without specific prior written
permission.
• THIS PACKAGE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
The End
GPL-1.0
GNU GENERAL PUBLIC LICENSE Version 1, February 1989
Preamble
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain that
everyone understands that there is no warranty for this free software.
If the software is modified by someone else and passed on, we want its
recipients to know that what they have is not the original, so that any
problems introduced by others will not reflect on the original authors'
reputations.
• You may copy and distribute verbatim copies of the Program's source
code as you receive it, in any medium, provided that you conspicuously
and appropriately publish on each copy an appropriate copyright notice
and disclaimer of warranty; keep intact all the notices that refer to
this General Public License and to the absence of any warranty; and
give any other recipients of the Program a copy of this General Public
License along with the Program. You may charge a fee for the physical
act of transferring a copy.
• You may modify your copy or copies of the Program or any portion of
it, and copy and distribute such modifications under the terms of
Paragraph 1 above, provided that you also do the following:
◦ cause the modified files to carry prominent notices stating that you
changed the files and the date of any change; and
◦ cause the whole of any work that you distribute or publish, that in
whole or in part contains the Program or any part thereof, either
with or without modifications, to be licensed at no charge to all
third parties under the terms of this General Public License (except
that you may choose to grant warranty protection to some or all
third parties, at your option).
◦ You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for
a fee.
Source code for a work means the preferred form of the work for
making modifications to it. For an executable file, complete source
code means all the source code for all modules it contains; but,
as a special exception, it need not include source code for modules
which are standard libraries that accompany the operating system
on which the executable file runs, or for standard header files or
definitions files that accompany that operating system.
copies, or rights to use copies, from you under this General Public
License will not have their licenses terminated so long as such parties
remain in full compliance.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further restrictions
on the recipients' exercise of the rights granted herein.
7. The Free
Software Foundation may publish revised and/or new versions of the
General Public License from time to time. Such new versions will be
similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
NO WARRANTY
END OF TERMS AND CONDITIONS
Appendix: How to Apply These Terms to Your New Programs
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
Also add information on how to contact you by electronic and paper mail.
The hypothetical commands `show w' and `show c' should show the
appropriate parts of the General Public License. Of course, the commands
you use may be called something other than `show w' and `show c'; they
could even be mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
GPL-3.0
GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007
Preamble
The GNU General Public License is a free, copyleft license for software
and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to your
programs, too.
To protect your rights, we need to prevent others from denying you these
rights or asking you to surrender the rights. Therefore, you have certain
responsibilities if you distribute copies of the software, or if you
modify it: responsibilities to respect the freedom of others.
Developers that use the GNU GPL protect your rights with two steps: (1)
assert copyright on the software, and (2) offer you this License giving
you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run modified
versions of the software inside them, although the manufacturer can do
so. This is fundamentally incompatible with the aim of protecting users'
freedom to change the software. The systematic pattern of such abuse
occurs in the area of products for individuals to use, which is precisely
where it is most unacceptable. Therefore, we have designed this version
of the GPL to prohibit the practice for those products. If such problems
arise substantially in other domains, we stand ready to extend this
provision to those domains in future versions of the GPL, as needed to
protect the freedom of users.
0. Definitions.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of
an exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
1. Source Code.
The "source code" for a work means the preferred form of the work for
making modifications to it. "Object code" means any non-source form of a
work.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's System
Libraries, or general-purpose tools or generally available free programs
which are used unmodified in performing those activities but which
are not part of the work. For example, Corresponding Source includes
interface definition files associated with source files for the work, and
the source code for shared libraries and dynamically linked subprograms
that the work is specifically designed to require, such as by intimate
data communication or control flow between those subprograms and other
parts of the work.
The Corresponding Source need not include anything that users can
regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same
work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey,
without conditions so long as your license otherwise remains in force.
You may convey covered works to others for the sole purpose of having
them make modifications exclusively for you, or provide you with
facilities for running those works, provided that you comply with the
terms of this License in conveying all material for which you do not
control copyright. Those thus making or running the covered works for you
must do so exclusively on your behalf, under your direction and control,
on terms that prohibit them from making any copies of your copyrighted
material outside their relationship with you.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
You may charge any price or no price for each copy that you convey, and
you may offer support or warranty protection for a fee.
a) The work must carry prominent notices stating that you modified it,
and giving a relevant date.
c) You must license the entire work, as a whole, under this License to
anyone who comes into possession of a copy. This License will therefore
apply, along with any applicable section 7 additional terms, to the whole
of the work, and all its parts, regardless of how they are packaged. This
License gives no permission to license the work in any other way, but it
does not invalidate such permission if you have separately received it.
You may convey a covered work in object code form under the terms of
sections 4 and 5, provided that you also convey the machine-readable
Corresponding Source under the terms of this License, in one of these
ways:
c) Convey individual copies of the object code with a copy of the written
offer to provide the Corresponding Source. This alternative is allowed
only occasionally and noncommercially, and only if you received the
object code with such an offer, in accord with subsection 6b.
Corresponding Source in the same way through the same place at no further
charge. You need not require recipients to copy the Corresponding Source
along with the object code. If the place to copy the object code is a
network server, the Corresponding Source may be on a different server
(operated by you or a third party) that supports equivalent copying
facilities, provided you maintain clear directions next to the object
code saying where to find the Corresponding Source. Regardless of what
server hosts the Corresponding Source, you remain obligated to ensure
that it is available for as long as needed to satisfy these requirements.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied by
the Installation Information.
7. Additional Terms.
When you convey a copy of a covered work, you may at your option remove
any additional permissions from that copy, or from any part of it.
(Additional permissions may be written to require their own removal
in certain cases when you modify the work.) You may place additional
permissions on material, added by you to a covered work, for which you
have or can give appropriate copyright permission.
e) Declining to grant rights under trademark law for use of some trade
names, trademarks, or service marks; or
If you add terms to a covered work in accord with this section, you must
place, in the relevant source files, a statement of the additional terms
that apply to those files, or a notice indicating where to find the
applicable terms.
8. Termination.
However, if you cease all violation of this License, then your license
from a particular copyright holder is reinstated (a) provisionally,
unless and until the copyright holder explicitly and finally terminates
your license, and (b) permanently, if the copyright holder fails to
notify you of the violation by some reasonable means prior to 60 days
after the cessation.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
You are not required to accept this License in order to receive or run a
copy of the Program. Ancillary propagation of a covered work occurring
solely as a consequence of using peer-to-peer transmission to receive a
copy likewise does not require acceptance. However, nothing other than
this License grants you permission to propagate or modify any covered
work. These actions infringe copyright if you do not accept this License.
Therefore, by modifying or propagating a covered work, you indicate your
acceptance of this License to do so.
Each time you convey a covered work, the recipient automatically receives
a license from the original licensors, to run, modify and propagate that
work, subject to this License. You are not responsible for enforcing
compliance by third parties with this License.
You may not impose any further restrictions on the exercise of the rights
granted or affirmed under this License. For example, you may not impose
a license fee, royalty, or other charge for exercise of rights granted
under this License, and you may not initiate litigation (including a
cross-claim or counterclaim in a lawsuit) alleging that any patent claim
is infringed by making, using, selling, offering for sale, or importing
the Program or any portion of it.
11. Patents.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
If the Program specifies that a proxy can decide which future versions of
the GNU General Public License can be used, that proxy's public statement
of acceptance of a version permanently authorizes you to choose that
version for the Program.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
You should have received a copy of the GNU General Public License along
with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
<program> Copyright (C) <year> <name of author> This program comes with
ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software,
and you are welcome to redistribute it under certain conditions; type
`show c' for details.
The hypothetical commands `show w' and `show c' should show the
appropriate parts of the General Public License. Of course, your
program's commands might be different; for a GUI interface, you would use
an "about box".
You should also get your employer (if you work as a programmer) or
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. For more information on this, and how to apply and follow the
GNU GPL, see <http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications
with the library. If this is what you want to do, use the GNU Lesser
General Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.