A. Benso, S. Di Carlo, P. Prinetto, and Y. Zorian, "IEEE Standard 1500 Compliance Verification for Embedded Cores," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 16, no. 4, pp. 397-407, 2008. ISSN 1063-8210. DOI: 10.1109/TVLSI.2008.917412. Publisher: IEEE.
This version is available at http://porto.polito.it/1785476/ (PORTO, the institutional repository of the Politecnico di Torino), deposited January 2009.
Abstract—Core-based design and reuse are the two key elements for an efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need of a common test infrastructure working with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance in both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description to the IEEE Standard 1500. This step is mandatory to fully trust the wrapper functionalities in applying the test sequences to the core. We present a systematic methodology to build a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) to the standard.

Index Terms—Electronic design automation (EDA) tools, IEEE 1500 Standard, unified modeling language (UML), verification of embedded cores.
I. INTRODUCTION

The increased density and performance of advanced silicon technologies made system-on-a-chip (SoC) application-specific integrated circuits (ASICs) possible. SoCs bring together a set of functions and technology features on a single die of enormous complexity. Each component is available as a predesigned functional block that comes as an intellectual property (IP) embedded core, reusable in different designs. These so-called embedded cores make it easier to import technologies to a new system and differentiate the corresponding product by leveraging IP advantages. Most importantly, the use of embedded cores shortens the time-to-market for new systems thanks to heavy design reuse [1].

What makes designing systems with IP cores an attractive methodology (e.g., design reuse, heterogeneity, reconfigurability, and customizability) also makes testing and debugging of these systems a very complex challenge [2].

There exist strong functional similarities between the traditional system-on-board and the SoC design. However, their manufacturing test process is quite different.

In a system-on-board, the IC provider is responsible for the design, manufacturing, and testing of the IC components of the system. In this context, the system integrator is, with respect to testing for manufacturing defects, only responsible for the interconnections between the ICs. The boundary scan test, also known as JTAG or the IEEE Standard 1149.1 [3], is a well-known technique to address this board-level interconnect test problem. In the SoC design flow, the core provider delivers a description of the core design to the system integrator at different possible levels: soft (register-transfer level), firm (netlist), or hard (technology-dependent layout). Since the core is delivered only as a model, a manufacturing test is impossible at this stage. Therefore, the test responsibility of the system integrator now concerns not only the interconnect logic and wiring between the cores, but also the IP cores themselves.

For an SoC design, the test of embedded cores constitutes a large part of the overall IC test, and hence substantially impacts the total IC quality level as well as its test development effort and associated costs. The adoption and design of adequate test and diagnosis strategies is therefore a major challenge in the production of SoCs.

The IEEE Standard Testability Method for Embedded Core-Based Integrated Circuits (IEEE Standard 1500 [4]) addresses the specific challenges that come with testing deeply embedded reusable cores supplied by diverse providers, who often use different hardware description levels and mixed technologies [2], [5], [6]. The IEEE Standard 1500 defines a scalable standard architecture to facilitate and support the testing of embedded cores and the associated interconnect circuitry in an SoC.

The standard does not cover the core's internal test methods or chip-level test access configuration. The standardization effort focuses on non-merged cores (cores that are tested as stand-alone units) and provides the following two main supports:
• a standardized core test language (CTL), capable of expressing all test-related information that needs to be transferred from the core provider to the core user;
• a standardized (but configurable and scalable) core test wrapper, allowing easy test access to the core in an SoC design.
Several publications presented solutions to build SoCs with IEEE Standard 1500 testability features [7], [8]; nevertheless, by analyzing the IEEE Standard 1500, it is clear that implementing a fully compliant core is not trivial. The IEEE Standard 1500 is in fact articulated in a large set of architectural rules. Some of them are very specific on particular design aspects, whereas

Manuscript received March 2, 2007.
A. Benso, S. Di Carlo, and P. Prinetto are with the Department of Information and Automation Technologies, Politecnico di Torino, 10129 Torino, Italy (e-mail: [email protected]; [email protected]; [email protected]).
Y. Zorian is with Virage Logic, Fremont, CA 94538 USA (e-mail: [email protected]).
Digital Object Identifier 10.1109/TVLSI.2008.917412
the metadata model and all the semantic checks can now be performed on the metadata model, where the naming convention is independent from the corresponding core.

A very high level UML class diagram modeling an example of the metadata model is provided in Fig. 8.

Fig. 8. IEEE 1500 Metadata UML class diagram.

Thanks to its internal naming convention and corresponding signal mapping, it becomes much easier to run checks on the content of the metadata model. For example, a simple rule that says

could be easily verified performing a control like

This operation will automatically check if a mapping exists between the WS_Bypass template in the metadata model and a (user-defined) register in the CTL information model.
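The rule text and the corresponding check shown at this point in the original are not reproduced in this excerpt. As a purely illustrative sketch of the kind of control meant here (the class and method names are assumptions of this write-up, not the framework's actual API, and the metadata is reduced to a plain map from IEEE 1500 template names to core-specific register names), such a check could look as follows:

    import java.util.Map;
    import java.util.Set;

    final class MappingCheckSketch {
        // Returns true if the WS_Bypass template of the metadata model is mapped
        // onto a register that actually exists in the parsed CTL information model.
        static boolean wsBypassMapped(Map<String, String> templateToRegister,
                                      Set<String> ctlRegisters) {
            String mapped = templateToRegister.get("WS_Bypass");
            return mapped != null && ctlRegisters.contains(mapped);
        }
    }

Because the check is phrased against the internal template name, it remains valid whatever register name the core provider chose in the CTL description.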
B. Behavioral Rules Verifier

The behavioral verification is the most complex step of the verification process. Behavioral and mixed rules can only be validated using a behavioral approach. Unlike the syntax and semantic verification, the behavioral approach is based on a functional simulation of the core test wrapper and of the core itself. Simulation is the only effective approach to verify compliancy of time-related rules, protocols, signal connections, and correct instruction implementation. The IEEE Standard 1500 Verification Framework models the behavioral rule verifier with the Behavioral_Rule_Verifier class in Fig. 3. The class implements the do_Verification() method that actually performs the behavioral verification. The behavioral rule verifier needs to simulate the core/wrapper; it therefore has a Simulate association with the IEEE_1500_Compliant_Core class (see Fig. 3). The IEEE_1500_Compliant_Core class models the core under verification (it usually corresponds to an HDL description of the core).

A well-known approach to perform this type of verification is the so-called dynamic, coverage-driven, constrained-random simulation functional verification.

The term dynamic refers to the fact that the verification patterns/stimuli are generated and applied to the design over a number of clock cycles, and the corresponding results are collected and compared against a reference/golden model. An EDA simulator is used both to compute the values of all signals during the simulation and to compare the expected values with the calculated ones.

Simple dynamic verification has a main drawback: only a subset of all possible behaviors can be verified in a time-bound simulation run. Testing all possible behaviors under every possible combination of input stimuli is in most cases an unfeasible task, since the test space is too large to be fully covered in a reasonable amount of time.

To overcome this problem, the number of verification patterns applied to the wrapper has to be statistically significant but not complete. To do this, verification input patterns are generated randomly under a set of constraints, which are expressed as mathematical expressions limiting the set of legal values on the input signals that drive the design. In this way, the simulator generates random values, and the constraints ensure that the generated scenarios are valid and plausible.
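As a toy illustration of this mechanism (this is not code from the framework or from any of the tools cited below), a constraint can be seen as a predicate that generated stimuli must satisfy, with generation simply retried until the predicate holds:

    import java.util.Random;
    import java.util.function.IntPredicate;

    final class ConstrainedRandomSketch {
        private final Random rng = new Random();

        // Draws a stimulus value in [0, bound) that satisfies the constraint,
        // e.g., v -> v != 0 && v < 12 to exclude illegal or reserved codes.
        int nextStimulus(int bound, IntPredicate constraint) {
            int value;
            do {
                value = rng.nextInt(bound);
            } while (!constraint.test(value));
            return value;
        }
    }

Commercial verification engines solve such constraints far more efficiently than this rejection loop, but the principle is the same: only legal, plausible stimuli reach the design.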
To further optimize this constrained-random generation, coverage-driven verification is used. Functional coverage metrics are automatically recorded in real time in order to ascertain whether (and how effectively) a particular test verified (or is verifying) a given feature; this information can then be fed back into the generation process in order to drive additional verification efforts more effectively towards the required goal. The coverage metrics are evaluated on coverage monitoring points defined by the user. The market offers a number of tools that are able to support this dynamic (or functional) verification methodology. The most used ones are Specman Elite (Cadence) [15] and Vera (Synopsys) [16]. Besides the different verification and pattern generation engines, all of them apply the verification patterns to the target design using a verification component placed around the core under analysis. The verification component, which is a behavioral-level module described using a proprietary verification language (e for Specman Elite, OpenVera for Vera), performs the constrained-random generation of the verification patterns, applies them, and is directly controlled by the verification engine monitoring the current coverage reached in the verification process.

To efficiently apply this verification approach to the behavioral verification of IEEE Standard 1500 rules, it is necessary to do the following (a minimal sketch of such a component is given after the list).
• Create a rule verification component for each rule (or subset of similar rules). The rule verification component is in charge of generating the verification patterns that, applied during the simulation, will allow checking that all the architectural and behavioral aspects of the rule are correctly implemented in the design.
• Identify in the design the rule coverage points for that rule. A coverage point is a signal/register in the wrapper that needs to be monitored in order to evaluate the coverage reached during the verification process on that particular rule.
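The paper does not reproduce the structure of these components; as a hedged sketch (the interface and method names below are assumptions, not the prototype's actual classes), a rule verification component can be abstracted as follows:

    import java.util.List;

    interface RuleVerificationComponent {
        String ruleId();                   // identifier of the IEEE 1500 rule being checked
        List<String> coveragePoints();     // wrapper signals/registers monitored for coverage
        void generatePatterns(long seed);  // constrained-random stimulus generation
        double coverage();                 // fraction of coverage points exercised so far
        boolean passed();                  // outcome of the checks applied during simulation
    }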
The concept of coverage applied to the IEEE Standard 1500 rules is very important. It not only allows computing the number of verified rules over the set of available ones, but also allows computing a compliancy level for each individual rule. As briefly explained before, if the combination of possible input patterns/internal states of the components involved in the rule is too large, the verification engine resorts to constrained-random pattern generation. This will lead to the application of a subset of all possible patterns and therefore to a rule coverage, or compliancy level, possibly lower than 100%.
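The paper does not give an explicit formula for this per-rule compliancy level; one natural way to express it (an assumption of this write-up, not a definition taken from the standard or the paper), writing CP(r) for the set of coverage points associated with a rule r, is:

    \mathrm{compliancy}(r) = \frac{\left|\{\, p \in CP(r) : p \text{ exercised during simulation} \,\}\right|}{\left|CP(r)\right|} \times 100\%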
The most challenging issue in this phase is to write rule verification components and identify rule coverage points that are independent from the specific core or wrapper under analysis. Again, the name mapping stored in the metadata model described in Section V-A can be used to abstract the verification component from the specific core implementation.

Another very important issue to be considered in this phase is whether the core under verification is a black box or a white box. The difference is in the amount of available information on the core's internal structure. In a black-box core, the only available information is the input/output (I/O) interface. For IP protection, the internal structure of a black-box core is unknown (except to the core designer, of course). A core integrator, who buys cores from different vendors, usually deals only with black-box cores. On the other hand, a core designer always has all the information regarding the core, and therefore uses the white-box approach.

From the IEEE Standard 1500 compliancy verification point of view, the difference between a black- and a white-box core directly impacts the degree of compliancy that can be verified: whereas in a white-box core all the internal signals of the core can be controlled and/or observed, and therefore all rules can be thoroughly verified, in a black-box core only the rules (or the portions of them) that do not require directly controlling or observing the core's internal signals can be fully verified. Full IEEE Standard 1500 verification compliancy can only be achieved when dealing with white-box cores or with black-box cores implementing only the basic requirements of the standard.

This very important aspect of the behavioral verification is modeled in the IEEE Standard Verification Framework by the Black_Box_Core_Test_Wrapper class and the White_Box_Core_Test_Wrapper class. Fig. 9 finally summarizes the main components of the behavioral verifier using a UML component diagram.
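The two classes are only named in the text (their structure is given by the UML diagrams, not reproduced in this excerpt); as an illustrative sketch of the distinction, with simplified names and members that are assumptions of this write-up:

    import java.util.Collections;
    import java.util.Set;

    // Sketch only: the two wrapper models differ in whether internal signals of
    // the core are available to the verifier for control and observation.
    abstract class CoreTestWrapperModel {
        abstract Set<String> ioSignals();                                 // always known
        Set<String> internalSignals() { return Collections.emptySet(); } // black-box default
    }

    final class BlackBoxCoreTestWrapper extends CoreTestWrapperModel {
        private final Set<String> io;
        BlackBoxCoreTestWrapper(Set<String> io) { this.io = io; }
        @Override Set<String> ioSignals() { return io; }
    }

    final class WhiteBoxCoreTestWrapper extends CoreTestWrapperModel {
        private final Set<String> io;
        private final Set<String> internals;
        WhiteBoxCoreTestWrapper(Set<String> io, Set<String> internals) {
            this.io = io;
            this.internals = internals;
        }
        @Override Set<String> ioSignals() { return io; }
        @Override Set<String> internalSignals() { return internals; }
    }

A rule verification component can then ask the wrapper model which signals it may legally control or observe, and restrict its checks accordingly.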
C. Semantic and Behavioral Verification Plan

As already introduced in Section III, the definition of an efficient test plan is critical in reducing the overall verification costs. The sequence diagram of Fig. 4 introduced a first level of scheduling among the three different verification activities performed by the IEEE Standard Verification Framework. We can now enter into more detail and model how these activities can be integrated together.

1) Syntax Verification Test Plan: The syntax verification is a one-step verification process. It just performs a single action that consists in parsing the CTL files provided with the IEEE Standard 1500 information model. No particular scheduling is needed for this verification.

2) Semantic and Behavioral Verification Test Plan: Both semantic and behavioral verification involve the verification of a set of IEEE Standard 1500 rules. The order in which the rules are verified is extremely important, first of all to guarantee the correctness of the result of the verification process and, second, to reduce the overall verification time by performing different actions in parallel.

The definition of the rule hierarchy described in Section V and modeled using the Depend_On association in the UML class diagram of Fig. 3 implicitly defines a verification plan for both semantic and behavioral rules. The plan is modeled by the UML sequence diagram of Fig. 10.

Fig. 10. Verification plan for semantic and behavioral rules.

The semantic/behavioral verifier sends a message to each rule object asking it to perform the verification. The message is an asynchronous message (half arrow), which means that the verification of the full set of rules is performed in parallel.
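A minimal sketch of such a dispatch policy, using plain Java concurrency utilities (the Rule abstraction and all names below are assumptions of this write-up, not the prototype's classes; rule dependencies are assumed to be acyclic):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;

    final class VerificationPlanSketch {
        interface Rule {
            String id();
            List<Rule> dependsOn();   // mirrors the Depend_On association
            boolean verify();         // the actual semantic or behavioral check
        }

        // Launches every rule asynchronously; a rule is started only after all
        // the rules it depends on have completed their own verification.
        static Map<String, CompletableFuture<Boolean>> runAll(List<Rule> rules) {
            Map<String, CompletableFuture<Boolean>> results = new HashMap<>();
            for (Rule r : rules) {
                schedule(r, results);
            }
            return results;
        }

        private static CompletableFuture<Boolean> schedule(
                Rule r, Map<String, CompletableFuture<Boolean>> results) {
            CompletableFuture<Boolean> scheduled = results.get(r.id());
            if (scheduled != null) {
                return scheduled;
            }
            CompletableFuture<?>[] deps = r.dependsOn().stream()
                    .map(d -> schedule(d, results))
                    .toArray(CompletableFuture[]::new);
            CompletableFuture<Boolean> result = CompletableFuture.allOf(deps)
                    .thenApplyAsync(ignored -> r.verify());
            results.put(r.id(), result);
            return result;
        }
    }

How a rule reacts to the status of the rules it depends on is the case analysis introduced in the next paragraph.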
Each rule, before performing its actual verification, first checks the status of the rules it depends on. It is possible to have the following three different situations:
VI. IMPLEMENTATION

This section overviews a prototype implementation of the abstract UML model of the IEEE Standard 1500 Verification Framework presented in the previous sections. The full framework has been implemented using the Java [17] development platform. The Java language is a fully object-oriented programming language and it is a perfect candidate to implement UML models.

Besides the general interface provided by the implementation, we will focus in this section on a possible implementation of the three types of verifiers (i.e., syntax, semantic, and behavioral), these components being the real core of the verification framework. As already introduced in Section V-B, the most critical component in the framework is the behavioral rule verifier. This component needs to perform a dynamic functional verification. Many commercial platforms are available to perform this type of verification; for our implementation, we used the Specman Elite [15] suite.
A. Syntax Verification

Several technologies allow the automatic implementation of parsers starting from the description of a formal grammar. We successfully implemented a syntax analyzer by using two open-source tools. JFlex [18] is a lexical analyzer generator for Java, developed by G. Klein. JFlex is designed to work together with the parser generator CUP [19] by S. Hudson. The two tools allow the description of a formal language grammar (in our case the CTL grammar) and the automatic generation of a Java environment (a collection of Java classes) able to perform the syntax analysis of a text file according to the defined language. The definition of the set of lexical rules and of the grammar needed to implement the parser has been performed by analyzing the IEEE CTL definition [10] and by translating a set of informal definitions contained in the standard into formal definitions suitable for the generation of the parsers.

Fig. 11. Lexical rules example.

Fig. 11 shows an example of how syntax definitions of the IEEE Standard 1450.6 (CTL) are translated into the formal definitions needed to generate the parser by using JFlex and CUP. In the example, we have three types of informal rules: rules #1 and #2 describe, with a simple, precise, and intuitive vocabulary, how the functional rules will look, and directly indicate the definition of a set of lexical definitions (i.e., "newline" and "tab"). Rule #3 is more complex: formally, it gives a global surface description without specifying, for instance, what types of characters are allowed or not. To formally describe these types of rules, an extensive analysis of the IEEE Standard 1450.6 has been necessary.
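The generated environment is driven like any JFlex/CUP parser; as a rough usage sketch (the class names Yylex and parser are the JFlex/CUP defaults and depend on the actual grammar specification, so they are assumptions here, and both classes are produced by the generators rather than written by hand):

    import java.io.FileReader;

    public class CtlSyntaxCheck {
        public static void main(String[] args) throws Exception {
            // JFlex-generated scanner: tokenizes the CTL file
            Yylex lexer = new Yylex(new FileReader(args[0]));
            // CUP-generated parser: checks the token stream against the CTL
            // grammar and reports any syntax violation as a parse error
            parser p = new parser(lexer);
            p.parse();
            System.out.println("CTL file " + args[0] + " passed the syntax check");
        }
    }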
B. Semantic Verification

The implementation of the semantic verification mainly consists in the implementation of the metadata structure defined in Section V-A. We implemented the metadata as a collection of Java classes, as defined by the UML class diagram in Fig. 8. The task of populating the metadata with the actual CTL information is delegated to the CTL parser described in Section VI-A. Finally, each IEEE Standard 1500 semantic rule has been translated into a set of queries performed on the metadata. This type of implementation is, however, not the most efficient one. Due to the number of rules to verify and to the complexity of a real core, the memory size and the complexity of the metadata structure in memory may become very high. For a commercial implementation, the use of a database management system (DBMS) would be recommended.
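As an illustration of the rule-as-query idea (again with hypothetical names, and with the metadata reduced to the same simplified map used earlier), the semantic verifier can simply run every rule object against the populated metadata and collect the violations:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    final class SemanticVerifierSketch {
        interface SemanticRule {
            String id();
            // Queries the metadata and returns the violations found;
            // an empty list means the rule is satisfied.
            List<String> check(Map<String, String> templateToRegister);
        }

        static List<String> verifyAll(List<SemanticRule> rules,
                                      Map<String, String> templateToRegister) {
            List<String> report = new ArrayList<>();
            for (SemanticRule rule : rules) {
                for (String violation : rule.check(templateToRegister)) {
                    report.add(rule.id() + ": " + violation);
                }
            }
            return report;
        }
    }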
C. Behavioral Verification

The implementation of the behavioral verification process relies on the use of the Specman Elite verification environment [15] interfaced with the Synopsys VCS simulator [20]. Specman Elite allows the definition of functional verification plans by using an object-oriented language named e. e is a complete verification language that allows the definition of the following:
• built-in data generation from object definitions and constraints;
• a notion of time, as in HDL simulators;
• built-in parallel execution;
• an HDL interface: read from and write to HDL signals at any hierarchical level, and call HDL tasks;
• predefined verification capabilities that automate handling checks without having to write complex procedural code;
D. Application Cases

The functionalities of the IEEE Standard 1500 Verification Framework prototype have been tested on a simple IEEE Standard 1500 compliant core implementing a four-bit counter with the following characteristics (a behavioral sketch of this core is given after the list):
• CLOCK input used as counting clock;
• RESET input to reset the counting state;
• LOAD input to force a new start value for the counting;
• 4-bit input DIN that indicates the start value used when LOAD is high;
• 4-bit output COUNT that indicates the actual counting value.
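The paper gives only this I/O description of the core (the core itself is an HDL design); as an illustration consistent with the list above, a cycle-accurate behavioral model can be sketched as follows (the priority of RESET over LOAD is an assumption):

    // Illustration only: behavioral model of the four-bit counter application case.
    final class CounterCoreModel {
        private int count;                     // 4-bit counter state (0..15)

        // Models one rising edge of CLOCK.
        void clockTick(boolean reset, boolean load, int din) {
            if (reset) {
                count = 0;                     // RESET clears the counting state
            } else if (load) {
                count = din & 0xF;             // LOAD forces the 4-bit start value DIN
            } else {
                count = (count + 1) & 0xF;     // otherwise count up, wrapping at 16
            }
        }

        int count() { return count; }          // 4-bit output COUNT
    }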
This core has been wrapped with an IEEE Standard 1500 core test wrapper having the following characteristics:
• instruction register (WIR), 3 bits long;
• bypass register (WBY), 1 bit long;
• boundary register (WBR), 8 bits long;
• optional TransferDR wrapper serial control;
• four implemented instructions: WS_BYPASS, WS_PRELOAD, WS_INTEST, WS_EXTEST.
The verification process run on the wrapper/core was successful from the very beginning, when it allowed us to discover design bugs we had involuntarily introduced in the wrapper design.

In order to carefully validate the verification capabilities of the prototype, we generated a set of different core test wrappers, systematically violating different rules of the standard. As an example, Fig. 12 shows the result of the behavioral verification in the case of a non-compliant wrapper. In the example, rule number 10.2.1.c fails. The prototype highlights this violation and also provides the waveform obtained by the simulation to give a better understanding of the reasons that led to the rule violation. Ongoing work is focusing on creating a violation-programmable wrapper, where different violations can be enabled or disabled in order to verify the efficiency of the verification framework even in the presence of multiple violations in the same wrapper.

Fig. 12. Behavioral rule violation example.
VII. CONCLUSION

In this paper, we presented a systematic methodology and an associated formal model to build a verification framework for IEEE Standard 1500 compliant cores. The framework is able to check if the implementation of the wrapper provided with an IP core correctly follows the architectural and behavioral rules defined in the IEEE Standard 1500. The proposed framework targets different possible users, from the core designer to the core integrator, and therefore is able to guarantee various levels of compliancy depending on the amount of information about the internal core structure available to the user. We also presented a proof-of-concept of the proposed model by implementing a prototype of the verification framework in Java, with the support of the Specman Elite verification toolkit. We believe that, in the near future, with the introduction of IEEE Standard 1500 compliant wrappers in all IP cores on the market, a verification framework following the model presented in this paper will be able to increase productivity, reduce design time, and optimize the test plan of very complex SoCs.

ACKNOWLEDGMENT

The authors would like to thank D. Scollo, G. Politano, A. Mouth, and L. Melchionda for their help in the development of this manuscript.

REFERENCES

[1] R. Gupta and Y. Zorian, "Introducing core-based system design," IEEE Des. Test Comput., vol. 14, no. 2, pp. 15-25, Oct. 1997.
[2] Y. Zorian, E. Marinissen, and S. Dey, "Testing embedded-core-based system chips," IEEE Computer, vol. 32, no. 6, pp. 52-60, Jun. 1999.
[3] IEEE Standard Test Access Port and Boundary-Scan Architecture, IEEE Std. 1149.1, 1990.
[4] IEEE Standard Testability Method for Embedded Core-Based Integrated Circuits, IEEE Std. 1500, 2005.
[5] L. Jin-Fu, H. Hsin-Jung, C. Jeng-Bin, S. Chih-Pin, W. Cheng-Wen, S.-I. C. C. Cheng, H. Chi-Yi, and L. Hsiao-Ping, "A hierarchical test methodology for systems on chip," IEEE Micro, vol. 22, no. 5, pp. 69-81, Sep. 2002.
[6] Y. Zorian, "Test requirements for embedded core-based systems and IEEE P1500," in Proc. IEEE Int. Test Conf. (ITC), Nov. 1997, pp. 191-199.
[7] T. McLaurin and S. Ghosh, "ETM10 incorporates hardware segment of IEEE P1500," IEEE Des. Test Comput., vol. 19, no. 3, pp. 6-11, May 2002.
[8] S. Picchiotino, M. Diaz-Nava, B. Forest, S. Engels, and R. Wilson, "Platform to validate SoC designs and methodologies targeting nanometer CMOS technologies," in Proc. IP-SOC, Dec. 2004, pp. 39-44.
[9] I. Diamantidis, T. Oikonomou, and S. Diamantidis, "Towards an IEEE 1500 verification infrastructure: A comprehensive approach," in Proc. ClubV, Mar. 2005.
[10] IEEE Core Test Language, IEEE Std. 1450.6, 2005.
[11] Object Management Group, Needham, MA, "UML official website," 2008 [Online]. Available: http://www.uml.org/
[12] Object Management Group, Needham, MA, "Object Management Group official website," 2008 [Online]. Available: http://www.omg.org/
[13] N. Wirth, Compiler Construction. Reading, MA: Addison-Wesley, 1996.
[14] Standard Test Interface Language (STIL), IEEE Std. 1450.0, 1999.
[15] Cadence, San Jose, CA, "Specman Elite home page," 2008 [Online]. Available: http://www.verisity.com/products/specman.html
[16] Synopsys, Mountain View, CA, "Vera web site," 2008 [Online]. Available: http://www.synopsys.com/products/vera/vera.html
[17] Sun Microsystems, "Java official website," 2008 [Online]. Available: http://www.java.com
[18] G. Klein, "JFlex home page," 2008 [Online]. Available: http://jflex.de/index.html
[19] GVU Center, Georgia Institute of Technology, Atlanta, "CUP home page," 2008 [Online]. Available: http://www2.cs.tum.edu/projects/cup/
[20] Synopsys, Mountain View, CA, "VCS home page," [Online]. Available: http://www.synopsys.com/products/simulation/simulation.html

Paolo Prinetto received the M.S. degree in electronic engineering from Politecnico di Torino, Torino, Italy. He is a Full Professor with the Department of Computer Engineering, Politecnico di Torino, and a Joint Professor with the University of Illinois, Chicago. His research interests include testing, test generation, BIST, and dependability. Prof. Prinetto is a Golden Core Member of the IEEE Computer Society and has served on the IEEE Computer Society Test Technology Technical Council (TTTC) as an elected chair.