
Understanding Verification: From Basics to Advance

Anoushka Tripathi

What is Verification?

Verification is not just about writing testbenches or running a bunch of tests. It’s a process that
ensures the design you’ve created works exactly as you intended.

Think about it this way:

• When you taste a dish while cooking, you’re verifying the flavor is what you want.

• When you match landmarks to a map, you’re verifying you’re heading in the right
direction.

These everyday activities are examples of verification!

In this chapter, we’ll explore the basics of verification:

• Why it’s important and how much it costs.

• How to make sure you’re checking for the right things in your design.

• The differences between testing and verification.

• How verification helps in reusing designs, and the challenges of reusing verification
itself.

What is a Testbench?

A testbench is a tool we use to test a design by simulating how it works. It’s a piece of code that:

1. Sends inputs to the design.

2. Watches how the design responds.

Testbenches are often written in SystemVerilog, but they can also use external files or even code written in C.

Imagine the testbench as a tiny universe for your design—it controls everything that happens.
No inputs or outputs come from the outside world. It’s all contained within this closed system.

The real challenge in verification is deciding:

• What inputs to send to the design.

• What the correct outputs should look like if the design is working perfectly.

This process ensures that the design behaves as expected in every possible scenario.
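
To make this concrete, here is a minimal sketch of such a closed system in SystemVerilog. The design under test (a two-input AND gate called and2) and its port names are assumptions chosen purely for illustration.

module tb_and2;
  logic a, b;                      // stimulus driven into the design
  logic y;                         // response observed from the design

  and2 dut (.a(a), .b(b), .y(y));  // the design under test lives inside the testbench

  initial begin
    // 1. Send inputs to the design.
    a = 0; b = 0; #10;
    a = 1; b = 1; #10;
    // 2. Watch how the design responds and compare it with the expected value.
    if (y !== 1'b1)
      $error("unexpected output: expected 1, got %b", y);
    $finish;
  end
endmodule

Everything the design sees is generated inside this module, and everything it produces is checked there as well; nothing crosses the boundary of this tiny universe.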

The Importance of Verification

In modern hardware design, verification is one of the most critical and time-consuming
activities. It ensures that the design works as intended, and without it, the risk of errors, delays,
or failures increases dramatically. Here’s a breakdown of why verification is essential and how it
can be made more efficient:

Why Does Verification Take So Much Effort?

• 70% of Design Effort:
In the age of multi-million-gate ASICs, FPGAs, reusable intellectual property (IP), and system-on-chip (SoC) designs, verification accounts for nearly 70% of the total design effort. This is because modern designs are incredibly complex, with numerous features and interactions to validate.

• Dedicated Teams:
To manage this complexity, many design teams have more verification engineers than
RTL designers—sometimes twice as many. These engineers focus entirely on ensuring
that the design meets its specifications.

Why is Verification on the Critical Path?

• Verification often lies on the project's "critical path," meaning it determines the overall
timeline. Several factors contribute to this:

o The shortage of skilled verification engineers.

o The sheer volume of code and scenarios to verify.

o Verification sometimes starts late, only after the design is complete. This delay
can cause project schedules to slip further.

• To address this, new tools and methodologies aim to speed up verification by enabling
parallel work, using higher-level abstractions, and automating repetitive tasks.

How Can Verification Time Be Reduced?

1. Through Parallelism:

o Parallelizing tasks allows multiple engineers or tools to work simultaneously.

o For example, digging a hole can be sped up by having multiple workers with
shovels. Similarly, multiple testbenches can be written and debugged in parallel
with the design's implementation.

2. Through Abstraction:

o Abstraction allows engineers to focus on higher-level tasks without worrying about every low-level detail.

o Example: Instead of digging with shovels, using a backhoe speeds up the process. In verification, working at a transaction or protocol level (instead of dealing with raw binary signals) enables faster testbench development.

o Caution: Higher abstraction levels reduce control over details, so it’s important
to switch between abstraction levels when needed.

3. Through Automation:

o Automation allows machines or tools to perform repetitive tasks quickly and predictably.

o In verification, automation can include tools that generate testbenches, create bus-functional models, or run randomized tests.

o While full automation is not possible due to the variety of designs and scenarios,
domain-specific automation tools can significantly reduce manual effort.

Randomization as a Verification Tool

• Randomization can act as a form of automation by generating diverse test scenarios automatically.

• By constraining random inputs to valid conditions, most of the interesting and challenging cases can be covered (see the sketch after this list).

o Example: A pool vacuum randomly moves along the bottom, covering most
areas without manual guidance. Similarly, constrained random testing can
explore edge cases in a design, freeing up engineers for more critical tasks.

• Randomized testing can also run overnight or on multiple systems simultaneously, increasing efficiency.
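
As a minimal sketch of constrained random stimulus in SystemVerilog, consider the hypothetical bus transaction below; the class name, fields, and legal ranges are assumptions chosen for illustration only.

class bus_item;
  rand bit [31:0] addr;
  rand bit [3:0]  len;

  // Constrain the random inputs to valid conditions only.
  constraint c_valid {
    addr inside {[32'h0000_0000 : 32'h0000_FFFF]};  // stay inside a legal address region
    len > 0;                                        // zero-length transfers are assumed illegal
  }
endclass

module tb_random_stimulus;
  initial begin
    bus_item item = new();
    repeat (10) begin
      if (!item.randomize())                        // the solver picks values satisfying c_valid
        $error("randomization failed");
      $display("addr=%h len=%0d", item.addr, item.len);
    end
  end
endmodule

Each call to randomize() produces a new legal scenario automatically, which is how constrained random testing covers many interesting cases without hand-written stimulus.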

Balancing Abstraction and Detail

• Effective verification requires a balance between high-level abstraction (e.g., protocol-level testing) and low-level detail (e.g., signal-level errors).

• Testbenches should be flexible enough to switch between these levels during execution.
For instance:

o A testbench might verify protocol-level functionality at a high level but switch to a lower level to inject a specific error (e.g., a parity fault) and observe the response (a sketch follows below).
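
A minimal sketch of this idea in SystemVerilog follows; the packet class, its fields, and the corrupt_parity knob are hypothetical and only illustrate how a low-level control can sit on top of a transaction-level item.

class packet;
  rand bit [7:0] data;
  bit            corrupt_parity;   // low-level knob on an otherwise high-level item

  function bit parity();
    // Even parity over the data, deliberately flipped when error injection is requested.
    return (^data) ^ corrupt_parity;
  endfunction
endclass

Most tests stay at the transaction level and simply randomize data; a directed test can set corrupt_parity to 1 to drive a parity fault onto the interface and observe how the design responds.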

Why Verification Matters

Without proper verification, designs are prone to errors, delays, and costly rework. Verification
ensures that the final product meets expectations, avoids failures, and reaches the market on
time. By using parallelism, abstraction, and automation effectively, verification efforts can be
optimized, making the entire process more efficient and reliable.

This holistic approach makes verification not just a task but the backbone of successful
hardware development.

The Reconvergence Model Explained

The reconvergence model is a way to visualize and understand the verification process in a
structured manner. It focuses on ensuring that any transformation applied to a design produces
the expected outcome by comparing the result with the original intent.

Purpose of the Reconvergence Model

The model answers the critical question: "What are you verifying?" Verification is not just
about finding errors; it is about confirming that the output of a process or transformation
matches the intended design or specification. Without this understanding, the verification
process lacks direction and purpose.

Key Components of the Reconvergence Model

1. Input Specification:

o This is the original design intent or the baseline.

o Examples include a functional specification or a behavioral model.

2. Transformation:

o Any process that changes the input specification to produce an output.

o Examples include:

▪ Writing RTL (Register-Transfer Level) code.

▪ Adding scan chains for testing.

▪ Synthesizing RTL into a gate-level netlist.

▪ Performing physical design steps like layout.

3. Verification:

o This step compares the output of the transformation against the input
specification.

o The process ensures that the transformation does not deviate from the intended
design.

4. Common Origin:

o Verification relies on a shared reference point (common origin) between the input and the output.

o Without this, there is no baseline for comparison, and effective verification cannot take place.

How the Reconvergence Model Works

The model shows two paths starting from the common origin:

1. Transformation Path:
The original specification undergoes a transformation to produce the output (e.g., RTL
coding or synthesis).

2. Verification Path:
A separate process is used to check whether the output matches the intent of the
original specification.

These two paths reconverge at the original specification, ensuring alignment between what was
intended and what was produced.

Application in Hardware Design

The reconvergence model is widely applicable in hardware design projects to ensure correctness at each stage of the process:

• RTL Coding Verification:
Ensures the written RTL code matches the high-level specification.

• Synthesis Verification:
Confirms that the gate-level netlist produced by synthesizing RTL code retains the
intended functionality.

• Physical Design Verification:
Verifies that the layout preserves timing and logical integrity.

Why is the Reconvergence Model Important?

1. Maintains Design Integrity:
By consistently comparing transformations against the specification, the design remains aligned with its intent.

2. Prevents Errors from Propagating:
Errors caught early during verification prevent issues from compounding in subsequent design stages.

3. Provides a Structured Verification Process:
The model ensures that every transformation has a clear verification step, creating a reliable flow for complex designs.

The Human Factor in Verification

The Human Factor introduces variability and potential errors in the verification process when
human interpretation is required to perform transformations, such as converting specifications
into RTL (Register Transfer Level) code. While verification strives to ensure correctness, human
involvement often creates challenges that need to be addressed through careful practices and
complementary mechanisms.

Key Challenges with Human Involvement

1. Subjectivity in Interpretation:

o Transformations like RTL coding require interpreting a written specification.

o The design is based on the implementer’s understanding, which may deviate from the original intent.

2. Verification Against Interpretation:

o When the same person performs both design and verification, the process may
confirm their interpretation rather than the specification itself.

o Misinterpretations in the design phase may remain undetected during verification.

3. Uncertainty and Unrepeatability:

o Human intervention introduces variability in outcomes.

o Errors arising from this variability can propagate through the process unless
adequately addressed.

Mechanisms to Mitigate Human Errors

Three complementary strategies can be applied to reduce the impact of human errors:

1. Automation:

o Definition: Eliminate human intervention by automating processes.

o Benefits:

▪ Ensures consistency and repeatability.

▪ Reduces the scope for human error.

o Limitations:

▪ Not feasible for creative or poorly defined processes, such as complex hardware design.

2. Poka-Yoke (Mistake Proofing):

o Definition: Design systems and processes to make human errors less likely or
inconsequential.

o Implementation:

▪ Break down interventions into simple, foolproof steps.

▪ Standardize steps to minimize ambiguity and ensure consistent results.

o Challenges:

▪ Works best for well-defined processes.

▪ Less effective for tasks requiring significant ingenuity.

3. Redundancy:

o Definition: Duplicate the transformation or verification effort to catch errors.

o Approaches:

▪ Independent Verification: A second individual independently checks the work.

▪ Parallel Transformations: Two separate teams or individuals perform the same task, and their outputs are compared.

o Applications:

▪ Common in high-reliability environments like aerospace and ASIC design, where errors have high stakes.

▪ Redundancy ensures that ambiguity in the specification is resolved through multiple perspectives.

o Cost Considerations:

▪ Redundancy is expensive but justified in scenarios where errors have severe consequences.

Reconvergence Model and Human Factors

1. Interpretation-Centered Verification (Figure 1-4):

o Process:

▪ The specification is interpreted by a designer to produce RTL code.

▪ Verification checks the design against this interpretation, not the original
specification.

o Problem:

▪ If the interpretation is incorrect, verification fails to detect it, leading to unintentional errors.

2. Redundancy-Centered Verification (Figure 1-5):

o Process:

▪ A second individual or process verifies the RTL code independently, reconciling it against the original specification.

▪ The outcome is validated against the original intent rather than a single interpretation.

o Benefits:

▪ Guards against misinterpretation and ensures alignment with the specification.

▪ Enhances reliability in ambiguous or critical design tasks.

What Is Being Verified?

The process of verification focuses on determining whether a design meets its intended goals,
but the specific transformation being verified depends on the origin and reconvergence
points of the verification process. Different tools and techniques focus on verifying different
aspects, such as equivalence, properties, or functional intent.

Origin and Reconvergence Points

• These points define what transformation is being checked.

• Verification tools like formal verification, property checking, and functional verification
rely on these points to determine their focus.

Understanding these points is critical to know what exactly is being verified, as they influence
whether the design conforms to its specification, or merely to an interpretation of it.

Types of Verification Processes

1. Formal Verification

Formal verification uses mathematical methods to prove that specific properties of a design
hold true. It does not eliminate the need for writing testbenches and is applied in two main
categories:

a. Equivalence Checking

• Definition:
Compares two models (e.g., RTL to gate-level netlists) to ensure the transformation
preserves functionality.

• Key Applications:

o Verifying that synthesis transformations (e.g., scan-chain insertion, clock-tree synthesis) maintain design correctness.

o Comparing minor revisions of RTL code to avoid running full simulations.

o Ensuring manual RTL implementations match legacy designs.

• Advantages:

o Finds subtle synthesis or tool-chain bugs.

o Proven effective in practice, for example by identifying a functional bug in an arithmetic operator involving operations wider than 48 bits.

• Reconvergence Model:
Verifies that the output matches the logical intent of the input transformation.

b. Property Checking

• Definition:
Proves specific assertions about the design's behavior based on defined properties.

• Key Applications:

o Ensuring state machines have no unreachable or isolated states.

o Verifying temporal relationships between signals.

o Checking for deadlock conditions or interface behavior (e.g., with SystemVerilog assertions; see the sketch after this list).

• Challenges:

o Identifying meaningful, non-trivial assertions that reflect external requirements.

o Limited capability for high-level or complex behavioral properties.

• Reconvergence Model:
Focuses on verifying specific properties rather than general design correctness.
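
As a minimal sketch of property checking with SystemVerilog assertions, the module below states a hypothetical handshake requirement; the signal names and the 4-cycle bound are assumptions for illustration.

module arbiter_props (input logic clk, rst_n, req, gnt);
  // Property: once asserted, a request must be granted within 1 to 4 clock cycles.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  assert property (p_req_gets_gnt)
    else $error("request was not granted within 4 cycles");
endmodule

A property checker attempts to prove this assertion for all reachable states, whereas a simulator only checks it along the scenarios that are actually exercised.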

2. Functional Verification

• Definition:
Functional verification ensures that a design aligns with its intended functionality as
per its specification.

• Key Insights:

o Verifies the design's intent rather than just its logical transformations.

o Relies on testbenches to simulate real-world scenarios and uncover discrepancies.

• Limitations:

o While it can prove the presence of bugs, it cannot prove their absence.

o Specification documents, often written in natural language, are prone to interpretation errors.

• Reconvergence Model:
Maps the design's behavior back to the specification to ensure it meets its intent.

The Role of Assertions and Testbenches

• Assertions:

o Used in property checking to formally specify and verify behavioral expectations.

o Best for low-level signal relationships and temporal properties.

o Require careful crafting to avoid trivialities that merely restate design behavior.

• Testbenches:

o Essential for functional verification, enabling simulation of scenarios and validation of design intent.

o Cannot guarantee exhaustive testing or absolute correctness.

Limitations and Philosophical Note

• Verification vs. Proof:

o Verification shows consistency with a specification but cannot mathematically prove that a design is entirely correct unless the specification itself is formally precise.

o Misinterpretations or ambiguities in the specification create a fundamental challenge for all verification processes.

• Absence of Errors:

o You can prove the presence of a bug with a single example of failure.

o However, proving the absence of bugs is impossible without exhaustive verification, which is typically impractical for complex designs.

Automation in Functional Verification

Automation is a cornerstone of effective verification because it reduces the time and effort
required to detect and resolve bugs. Some tools automate routine checks, allowing engineers to
focus on more complex issues. For example, simulators are crucial for functional verification
since they run simulations of the design to check if it behaves as expected. However, tools like
linting and code coverage go further, automating processes that would otherwise consume
significant time and effort.

Key Responsibilities of a Verification Engineer and Project Manager

Verification engineers must choose the right technologies to ensure that no significant bugs are
missed during the verification process. The goal is to improve confidence in the product's
functional correctness by using tools and technologies that highlight issues early in the design
process.

Project managers, on the other hand, must balance delivering a working product on time and
within budget while equipping their engineers with the right tools to ensure confidence in the
verification process. One of their most critical responsibilities is deciding when to stop testing,
weighing the cost of finding additional bugs against the value of increased correctness.

Verification Technologies Overview

The chapter introduces various technologies that are used in different EDA (Electronic Design
Automation) tools. A single tool might incorporate multiple technologies to optimize the
verification process. For example, some tools perform “super linting”, which combines
traditional linting with formal verification, while hybrid tools may combine simulation and
formal analysis.

Synopsys Tools

As the author was a Synopsys employee at the time of writing, many tools discussed are from
Synopsys. However, the tools mentioned could also have counterparts from other EDA
companies.

Linting Technology

Linting is one of the verification tools that identify common coding mistakes early on without
running simulations. The term "lint" originated from a UNIX utility for C programming that would
identify questionable or erroneous code constructs. This allowed programmers to find and fix
mistakes efficiently without waiting for runtime errors.

Advantages of Linting

1. Quick Problem Identification: Linting identifies issues like mismatched argument types or the incorrect number of arguments, as seen in Sample 2-1. These issues would otherwise lead to runtime errors, but linting can catch them in seconds, making it far more efficient.

2. No Stimulus Required: Unlike simulations, linting doesn't need stimulus input or expected output descriptions. It performs static code checks based on the built-in rules of the linting tool.

3. Early Detection: Linting helps identify problems during the development process rather
than during testing or debugging, which saves time.

Limitations of Linting

Despite its advantages, linting has limitations:

1. Static Analysis Only: Linting can only catch certain types of errors based on the
structure of the code. It cannot detect logical issues or problems related to algorithmic
behavior. For example, in Sample 2-3, it cannot determine that an uninitialized variable
might cause unpredictable results.

2. False Positives and Negatives: Linting often reports many false positives, leading to
"alert fatigue" where developers may get frustrated with non-existent issues. On the flip
side, it may miss genuine logical issues that cannot be detected through static analysis.

3. Limited Scope: Linting cannot catch deeper issues, such as race conditions in
concurrent processes, or functional bugs related to data flow or logic errors. These
issues often require simulation or formal methods.

Effective Use of Linting

To make the best use of linting:

• Filter Error Messages: You can reduce frustration and clutter by filtering out known
false positives and focusing on genuine problems. This minimizes the chance of missing
critical errors amidst false alarms.

• Enforce Naming Conventions: A well-structured naming convention can help automate the filtering process by distinguishing expected behavior from potential errors. For example, if a signal name carries a _lat suffix, you can expect a latch and ignore a warning about inferred latches (see the sketch after this list).

• Run Linting Continuously: Linting should be performed regularly while code is being
written to catch issues early and reduce the risk of overwhelming error reports after a
large amount of code is developed.
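
A minimal sketch of the naming-convention idea follows; the module and signal names are illustrative. The incomplete if statement infers a latch, which lint tools normally flag, but the _lat suffix declares the latch intentional so the warning can be filtered by name.

module latch_example (
  input  logic       enable,
  input  logic [7:0] data_in,
  output logic [7:0] data_lat      // _lat suffix: a latch is expected on this signal
);
  always @(*) begin
    if (enable)
      data_lat = data_in;          // no else branch: storage (a latch) is inferred
  end
endmodule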

Linting for SystemVerilog

Linting is especially useful for SystemVerilog, where it catches errors that are syntactically correct but might lead to functional problems, such as the counter example shown in Sample 2-5. The code looks correct and compiles without errors, but the use of a byte type (which is a signed 8-bit value) causes issues with the condition counter < 255, as the counter will never reach 255. Linting can immediately flag this problem, allowing for a quick fix without running any simulations.
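
Sample 2-5 itself is not reproduced here, but the sketch below shows the kind of code it describes: byte is a signed 8-bit type whose maximum value is 127, so the comparison counter < 255 is always true and the count never stops where the author intended.

module counter_bug (input logic clk, rst_n);
  byte counter;                     // signed 8-bit: range is -128 to 127

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)
      counter <= 0;
    else if (counter < 255)         // always true for a signed byte; lint flags this
      counter <= counter + 1;       // wraps from 127 back to -128 instead of stopping at 255
  end
endmodule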

Advanced Linting with Formal Methods

Modern linting tools may integrate formal verification techniques to perform more advanced
static checks, such as identifying unreachable states in an FSM or unexecuted code paths.
These advanced linting tools are capable of detecting more subtle issues that go beyond basic
syntax or structural analysis.

Simulation: Definition and Purpose

1. Simulation as a Verification Technology

o Simulation is a method for verifying hardware designs by creating a virtual model of the intended system.

o It allows designers to identify and correct flaws before manufacturing, saving time and cost.

2. Not the Final Goal

o Simulation is not the end product. The ultimate aim is to create a functional,
physical design.

3. Approximation of Reality

o Simulations mimic reality but simplify many aspects. They do not capture all
physical characteristics, such as continuous voltage variations or asynchronous
events, and focus on a manageable subset for testing.

Core Concepts in Simulation

1. Stimulus and Response

o Simulations require input stimuli, often provided by a testbench, which emulates the environment the design will operate in.

o Outputs from the simulation must be validated externally by comparing them with design intent.

2. Model Execution and Limitations

o A simulation executes a model description, typically written in a hardware description language like SystemVerilog.

o The simulation's accuracy depends on how well the model reflects the actual
design.

3. Event-Driven Simulation

o Simulation focuses on events—changes in input that drive the execution of corresponding outputs.

o This approach optimizes performance by avoiding unnecessary computation when inputs remain constant.

Challenges and Optimization

1. Performance Bottlenecks

o Simulators are slower compared to the real-world physical systems they emulate due to computational limitations of general-purpose computers.

o Techniques like event-driven simulation aim to reduce unnecessary computations, e.g., skipping execution when inputs are unchanged.

2. Trade-Offs in Accuracy

o Acceleration techniques such as simplifying delay values or reducing the number of states (e.g., using only logic 0 and 1) improve speed but reduce simulation accuracy.

Simulation Techniques

1. Event-Driven Simulation

o Changes in inputs trigger the simulation of specific parts of the design (see the sketch after this list).

o If multiple inputs change but result in no output change, the simulator might still
execute those events to maintain logical consistency.

2. Cycle-Based Simulation

o Focuses on significant events, typically driven by clock cycles in synchronous circuits.

o Optimizes by simulating only relevant outputs, skipping intermediate combinational signal states.
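
As a minimal sketch of what event-driven means in practice, the process below is evaluated only when one of its inputs changes; at every other simulation time it costs nothing. The gate and signal names are illustrative.

module and_gate (input logic a, b, output logic y);
  // The sensitivity list defines the triggering events: changes on a or b.
  always @(a or b)
    y = a & b;
endmodule

A cycle-based simulator would instead evaluate the whole cone of logic once per clock cycle, ignoring the intermediate events between clock edges.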

Applications and Limitations

1. Detecting Flaws

o Simulations allow early identification of design flaws under different input scenarios.

o Designers can interact with and refine designs iteratively.

2. Physical Reality Simplifications

o Continuous signals in the real world are represented in discrete forms (e.g., 0, 1,
unknown, high-impedance).

o Such approximations may overlook subtle issues that arise in actual hardware.

The Role of the Verification Plan Explained Simply

A verification plan is a critical document in the hardware design process, ensuring that the
design is thoroughly tested and meets all requirements before manufacturing. Here's a
breakdown of its role and importance:

1. Why Verification Plans Are Necessary

The Old Way: Ad-Hoc Verification

• In the past, verification was informal and unstructured:

o Each designer tested their work as they saw fit, often leaving flaws
undiscovered.

o Serious problems were identified during system integration, leading to rushed fixes or costly workarounds.

o Flexible but expensive solutions, like FPGAs, were often used to address design
flaws found later.

Today’s Approach: Structured Verification

• Modern designs are too complex for ad-hoc methods.

• Metrics like code coverage, functional coverage, and bug discovery rates help track
progress but don't define the entire process.

• A clear plan is required to determine when verification is complete and to ensure all
critical aspects are tested.

2. What Is a Verification Plan?

A verification plan outlines:

• What needs to be tested: Based on the design specifications.

• When testing is done: A schedule for completing tests to a defined level of confidence.

• How testing is done: Tools, methods, and testcases required.

Starting Point: The Design Specification

• A design specification must exist before a verification plan can be created.

• It is typically composed of two parts:

1. Architectural Specification: High-level functional requirements of the system.

2. Implementation Specification: Detailed descriptions of how the architecture will be realized.

• The verification plan begins with the architectural specification and evolves as the
implementation specification is completed.

3. The Specification Document: The “Golden Reference”

• The specification document is the authoritative source for both design and verification.

o Purpose: Resolves disputes about the correct behavior of the design.

o Importance: If there’s ambiguity in the specification, it can lead to errors during verification.

o Rule: The design implementation must follow the specification. Verification becomes meaningless if the specification is vague or changes alongside the design.

4. The Verification Plan as a Specification for Testing

• Just like the design has a specification, the verification effort also needs its own plan.

• Verification is often as labor-intensive as the design itself (or even more so), making a
structured plan essential.

• The plan ensures that:

o All critical features are tested.

o Priorities are clear—essential features vs. optional features.

o Schedule pressures don’t lead to missed testing of important features.

5. Ensuring First-Time Success

• A good verification plan defines what "first-time success" means:

o Which features must work correctly from the start?

o Under what conditions must they be tested?

o What are the expected responses from the design?

• Key Principle:

o If a feature or condition isn’t in the plan, it won’t be verified.

o The plan ensures intentional, well-prioritized testing instead of random or incomplete efforts.

6. Benefits of a Verification Plan

• Predictability: Helps estimate the time, resources, and effort required.

• Clarity: Provides a shared understanding of goals among the team.

• Confidence: Ensures the design is thoroughly tested, reducing risks of failure in the
field.

• Flexibility: Helps manage scope—optional features can be deprioritized under schedule constraints.

Organizing the Verification Process

This section explains how the verification process should be organized, structured, and
implemented to ensure a hardware design meets all its requirements. It emphasizes the
importance of planning, levels of granularity in testing, and the role of the team in creating a
robust design. Let’s break it down.

1. Sticking to the Verification Plan



• The Risk of Cutting Off Verification Too Early:

o If verification is stopped prematurely, critical features might be left untested, leading to market failure.

o A detailed verification plan acts as a "line in the sand", ensuring all essential
tests are completed before the design is shipped.

• Creating a Verification Schedule:

o Based on the plan, you can determine:

▪ How many tests are needed.

▪ How complex they need to be.

▪ How they can be run in parallel for efficiency.

o Rule: The design should only be shipped after passing all tests and meeting
coverage and bug-rate metrics.

• Team Involvement:

o Verification is a team effort. Everyone, including RTL designers, must contribute to the plan.

o The goal is not just to create RTL (hardware description code) but to deliver a
fully functioning design.

2. Verification Isn’t a New Process

• The approach to creating a verification plan has been used in ultra-reliable systems, like
those from NASA and the aerospace industry, for decades.

• These methods ensure reliability and are applied to both hardware and software
designs.

3. Levels of Verification

Verification is conducted at different levels of granularity. This involves testing designs in smaller pieces (units, blocks) and larger integrated systems (sub-systems, chips, and boards).

Granularity and Trade-offs:

• Smaller Partitions (Units or Blocks):

o Easier to control and observe during testing.

o Good for verifying specific features or conditions.

• Larger Partitions (Systems or Boards):

o Implicitly test the integration of smaller partitions.

o Harder to control and observe due to complexity.



Stable Interfaces Are Key:

• To make progress in testing, the interfaces (connections between design parts) and
functionality of partitions must remain stable.

• Frequent changes to interfaces slow down the process because testbenches (testing
setups) must be updated constantly.

4. Unit-Level Verification

• What Are Design Units?

o Units are small modules, like FIFOs (queues) or DSP datapaths.

o Their functionality and interfaces often change during development.

• Ad-Hoc Testing for Units:

o Units are tested informally by their designers, using simple checks or embedded assertions (see the sketch after this list).

o These tests ensure basic functionality but don’t require comprehensive or reusable test environments.

• Why Not Test Units Thoroughly?

o There are too many units in a project, and creating detailed tests for each would
be inefficient.

o Unit-level testing focuses on avoiding syntax errors and ensuring basic functionality.
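
A minimal sketch of such an embedded check is shown below, assuming a hypothetical FIFO unit with push and full signals; the designer drops an immediate assertion into the unit itself instead of building a full test environment around it.

module fifo_checks (input logic clk, push, full);
  // Informal, designer-level check: pushing while full is a basic functionality error.
  always @(posedge clk) begin
    if (push)
      assert (!full) else $error("push asserted while the FIFO is full");
  end
endmodule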

5. Block and Core Verification

• What Are Design Blocks?

o Blocks are larger, more stable groupings of units.

o Blocks are the smallest partitions tested independently and thoroughly.

• Reusable Cores:

o Some blocks, like reusable cores, are designed for use in multiple projects.

o These must be verified to ensure they function correctly across different designs.

• Architecting for Block-Level Testing:

o The design should group related features into blocks for standalone verification.

o This ensures verified blocks work correctly during system-level testing.

• Standardized Interfaces:

o Blocks and cores should use standard interfaces to simplify testing and promote
reusability.

o Verification components can then be reused across projects, saving effort.

• Regression Testing for Blocks:

o Blocks require regression tests to ensure functionality remains correct after modifications.

o Thorough code and functional coverage are necessary because block functionality is assumed correct at the system level.
