
EE5530 Lecture 8: Concurrency in SystemVerilog

Memory Read

❑ What are the user-defined data types?
❑ typedef, enum
❑ Structures for Scoreboarding
❑ Interfaces

Interface: Advantages

An interface is ideal for design reuse. When two blocks communicate with a specified protocol using more than two signals, consider using an interface.

The interface takes the jumble of signals that you declare over and over in every module or program and puts it in a central location, reducing the possibility of misconnecting signals.

To add a new signal, you just have to declare it once in the interface, not in higher-level modules, once again reducing errors.

Modports allow a module to easily tap a subset of signals from an interface. You can specify signal direction for additional checking.

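A minimal sketch of an interface with modports, assuming a made-up memory-style protocol (the name mem_if and all of its signals are illustrative, not from the lecture):

// Hypothetical interface bundling the signals of a simple protocol.
interface mem_if;
  logic        clk;
  logic        req, gnt;
  logic [7:0]  addr;
  logic [15:0] data;

  // Modports tap a subset of the signals and add direction checking.
  modport master (input clk, gnt, data, output req, addr);
  modport slave  (input clk, req, addr, output gnt, data);
endinterface

// A design block connects through a single interface port
// instead of a long list of individual signal ports.
module mem_master (mem_if.master bus);
  always_ff @(posedge bus.clk)
    if (bus.gnt) bus.req <= 1'b0;   // Drop the request once it is granted.
endmodule
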
Interface: Disadvantages

For point-to-point connections, interfaces with modports are almost as verbose as using ports with lists of signals. Interfaces still have the advantage that all the declarations are in one central location, reducing the chance of making an error.

You must now use the interface name in addition to the signal name, possibly making the modules more verbose.

If you are connecting two design blocks with a unique protocol that will not be reused, interfaces may be more work than just wiring together the ports.

It is difficult to connect two different interfaces. A new interface (bus_if) may contain all the signals of an existing one (arb_if), plus new signals (address, data, etc.). You may have to break out the individual signals and drive them appropriately.

Concurrency

The Fundamental Question

Why hasn't C been used as a hardware description language instead of creating Verilog, VHDL, SystemVerilog and many others?

Because the C language lacks three fundamental concepts necessary to model hardware designs: connectivity, time and concurrency.

Connectivity, Time and Concurrency

Connectivity is the ability to describe a design using simpler blocks and then connecting them together. Schematic capture tools are perfect examples of connectivity support.

Time is the ability to represent how the internal state of a design evolves over time and to control its progression and rate. This concept is different from execution time, which is a simple measure of how long a program runs.

Concurrency is the ability to describe actions that occur at the same time, independently of each other.

In SystemVerilog

Connectivity in SystemVerilog is implemented by directly instantiating modules and interfaces within modules, and connecting the pins of the modules and interfaces to wires or registers.

Time is implemented by using timing control statements such as @ and wait.

Concurrency is implemented through separate always and initial blocks; it is described in further detail in the following slides, and a small sketch follows below.

An understanding of concurrency is often what separates the experienced designer from the newcomer.

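A minimal sketch combining the three concepts (the dff module and the testbench names are assumed for illustration): the instantiation provides connectivity, the # delays and @(posedge clk) express time, and the always and initial blocks execute concurrently.

module tb;
  logic clk = 0, d, q;

  dff dut (.clk(clk), .d(d), .q(q));   // Connectivity: instantiate and wire up a block.

  always #5 clk = ~clk;                // Concurrency: this block toggles the clock forever.

  initial begin                        // Concurrency: this block runs in parallel with the others.
    d = 0;
    #12 d = 1;                         // Time: advance simulation time by 12 time units.
    #20 $finish;
  end
endmodule

module dff (input logic clk, d, output logic q);
  always_ff @(posedge clk)             // Time: wait for a clock edge.
    q <= d;
endmodule
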
Concurrent systems are difficult to describe...

Multi-processor machines are relatively easy to build, but they have proved much more difficult to program.

Human beings are adept at performing relatively complex tasks in parallel, but it seems that we are better at describing a process or following instructions in a sequential manner.

The description of concurrent systems has evolved into a hybrid approach: individual processes running in parallel with each other are themselves described using sequential instructions.

Every always and initial block, every continuous assignment and every forked statement in a SystemVerilog model executes in parallel with the others, but internally each executes sequentially, as the sketch below illustrates.

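A minimal sketch of this hybrid view using fork...join (the delays and messages are illustrative): the three forked branches run in parallel, while the statements inside each branch run sequentially.

initial begin
  fork
    begin            // Thread 1: sequential inside.
      #5 $display("[%0t] thread 1, step 1", $time);
      #5 $display("[%0t] thread 1, step 2", $time);
    end
    #7 $display("[%0t] thread 2", $time);   // Thread 2 runs in parallel with thread 1.
    #3 $display("[%0t] thread 3", $time);   // Thread 3 likewise.
  join               // Resume here only after all three threads finish.
  $display("[%0t] all threads done", $time);
end
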
Emulating Parallelism on a Sequential Processor

How do you execute a parallel description on a single processor, which is itself a sequential machine?

During normal day-to-day use, you are very likely to have several windows open at once, each of them running a different application. The applications running in all of these windows appear to work in parallel even though there is a single sequential processor to execute them.

How is this possible?

Big idea: time-sharing.

Time-sharing

Each application uses the entire processor for small portions of time.

Each application has its turn according to priority and activity.

If the performance of the processor and operating system is high enough, the interruptions in the execution of a program are below our threshold of detection.

It appears as if each program runs smoothly 100% of the time, in parallel with all the others.

A simulator works on the same principle. Each always and initial block or thread gets the simulation engine for some portion of time, so they appear to run in parallel.

Verilog Execution Semantics

[A sequence of event-queue diagrams illustrating Verilog execution semantics appeared here.]

Updates trigger other events (1) to be added to the active event queue, but B does not see the new value from A, since B has already evaluated its RHS.

Terminology

Event:
Sort of a to-do item for the simulator. May include running a bit of Verilog code or updating an object's value.

Event Queue:
Sort of a to-do list for the simulator. It is divided into time slots and time slot regions.

Time Slot:
A section of the event queue in which all events have the same time stamp.

Time Slot Region:
A subdivision of a time slot. There are many of these. Important ones: active, inactive, NBA.

Scheduling:
Determining when an event should execute. The "when" consists of a time slot and a time slot region.

Update Event:
The changing of an object's value. Will cause processes *sensitive* to that object to be scheduled.

Time Slot Regions

Rationale:
"Do it now!" is too vague. Need to prioritize.

SystemVerilog divides a time slot into 17 regions.

Some Regions

Active Region:
Events that the simulator is currently working on. Only the current time slot has this region.

Inactive Region:
Contains normally scheduled events. Current and future time slots have this region.

initial begin
  // i-a.0
  a = 1;
  #3;
  // i-a.1
  a = 2;
end

initial begin
  // i-b.0
  b = 10;
  #1;
  // i-b.1
  b = a;
end

// c
assign c = a + b;

1: Verilog puts all initial blocks in t = 0's inactive region.
2: Active region is empty, and so inactive is copied to active.
3: Event i-a.0 executes and schedules event c for t = 0 and i-a.1 for t = 3.
4: Event i-a.0 is removed from the active region (it is now not scheduled anywhere).
5,6: Event i-b.0 executes and schedules i-b.1 for t = 1.
7,8: Since the active region is empty, the inactive region is bulk-copied to the active region.
9: Event c executes.
10-12: Since all regions in time slot 0 are empty, move to the next time slot, t = 1.

Event Types

Evaluation Event:
Indicates that a piece of code is to be executed or resumed.
Sometimes just referred to as events, or resume events.
All events from the previous event queue example were evaluation events.

Update Event:
Indicates that the value of an object is to be changed.
Update events are created by executing non-blocking assignments.

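A minimal sketch showing both event types in one block (the signals are assumed for illustration): the blocking assignment runs as part of the block's evaluation event, while the non-blocking assignment only creates an update event for q that is applied later in the same time slot.

always_ff @(posedge clk) begin  // posedge clk schedules an evaluation event for this block.
  tmp = d & en;                 // Blocking: executes immediately, within the evaluation event.
  q  <= tmp;                    // Non-blocking: RHS computed now; an update event for q is
end                             // scheduled in the NBA region of the current time slot.
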
Event Scheduling

Event Scheduling: The placing of an event in the event queue.

Types of Scheduling

Initially Scheduled Events
Events scheduled when simulation starts, such as for initial blocks.

Time-Delay Scheduled Events
Events scheduled when execution reaches a time delay.

Sensitivity-List Scheduled Events
Events scheduled when certain object values change.

Non-Blocking Assignment (NBA) Scheduled Update Events
Update events scheduled when a non-blocking assignment is reached.

Event Scheduling
Time-Delay Scheduled

When a delay control, e.g. #12, is encountered...
...schedule (put) a resume event in the inactive region...
...of the future (t + delay) time step.

Time-Delay Scheduled Example:

b++;
// Label L1
#4;  // Schedule resume event for L2 at time t+4.
// Label L2
a = b;

Sensitivity List Scheduled

When an object in a sensitivity list changes...
...schedule a resume or check-and-resume event for the code associated with the sensitivity list...
...in the inactive region of the current time step.

Explicit Event Examples:

// Label: L0
@( x );  // Put a check-@-condition event in sensitivity list of x ..
         // .. event will resume at L1 if condition satisfied ..
         // .. meaning any change in x.
// Label: L1

@( posedge clk );  // Put a check-@-condition event in sensitivity list of clk ..
                   // .. event will resume at L2 if condition satisfied ..
                   // .. meaning a 0->1 transition.
// Label: L2

wait( stop_raining );  // Put a check-wait-condition event in sensitivity list of stop_raining ..
                       // .. event will resume at L3 if stop_raining != 0.
// Label: L3

always_comb or assign

Live-in of always_comb or always @*:

always_comb x = a + b;  // Put an execute event in sensitivity list of a and b.

always_comb begin  // Put an execute event in sensitivity list of ..
  y = d + e;       // .. d, e, and f, BUT NOT y (y is not live-in).
  z = y + f;
end

Continuous assignment:

assign x = a + b;  // Put an execute event in sensitivity list of a and b.


NBA Scheduled Update Events

When a non-blocking assignment is executed...
...the value of the right-hand side is computed...
...and an update event is scheduled in the NBA region of the current time step...
...when the update event executes, the left-hand-side variable is updated with the value.

always_comb begin
  y <= a + b;  // Schedule an update-y event in NBA region, keep executing.
  e = y + g;   // Uses old y.
end

Example: Non-blocking assignments

Show the state of the event queue for the module below...
...starting at t = 10 and given the external events described below.

module misc #( int n = 8 )
   ( output logic [n-1:0] a, g, e,
     input  uwire [n-1:0] b, c, j, f,
     input  uwire clk );

   logic [n-1:0] z;

   always_ff @( posedge clk ) begin  // Label: alf
      a <= b + c;
      z = a + j;
      g = z;
   end

   always_comb  // Label: alc
      e = a * f;

endmodule

Example's Sensitivity Lists and Update Events

(Code: the misc module above, with blocks labeled alf and alc.)

Sensitivity List for Example Code

clk: Due to @(posedge clk). If 0 -> 1, schedule alf.
a: Due to always_comb. Any change, schedule alc.
f: Due to always_comb. Any change, schedule alc.

Update Events for Example Code

Execution of a <= b + c will result in scheduling an update event (for a).


Conditions and External Events

Example Problem Assumptions:

The queue is initially empty at t = 10.
At t = 10, j changes.
At t = 12, clk changes from 0 to 1.
At t = 14, f changes.

Event Queue Changes

Step 1: Queue is empty.
Step 2: At t = 10, j changes.
Step 3: No change, because j is not in a sensitivity list.

Event Queue Changes

Step 5: At t = 12, clk changes from 0 to 1, scheduling alf in the inactive region (Step 6).
Step 7: Since the active region is empty, the inactive region is copied to active.
Step 8: alf starts execution.
Step 10: Execution of a <= b + c results in scheduling Upd-a in the NBA region.

Event Queue Changes

Step 11: alf finishes, leaving the active region empty.
Step 13: The next non-empty region, NBA, is copied to the active region.
Steps 14-16: Upd-a causes alc to be scheduled.
Steps 17-20: alc is moved to the active region, runs, and finishes.

Event Queue Changes

Step 22: At t = 14, f changes, scheduling alc.
Steps 23-26: alc is moved to the active region, executes, and finishes.
Step 27: If nothing else happens, the simulation ends.

Delta Cycles

As threads are simulated and new values are assigned after zero delays, the state of the simulation evolves and progresses, but time does not advance. These zero-delay cycles, in which threads are evaluated and zero-delay nonblocking values are assigned, are called delta cycles. The simulation progresses first along the delta axis and then along the real-time axis, as the sketch below illustrates.

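A minimal sketch of delta cycles (the signals a, x and y are assumed for illustration): everything below happens at the same simulation time, yet y settles one delta cycle after x, because x's update must first trigger a re-evaluation of the second block.

logic a = 0, x = 0, y = 0;

always @(a)    // When a changes at time t, an update of x is scheduled for the same time t.
  x <= a;

always @(x)    // The update of x, still at time t, triggers this block in the next
  y <= x;      // delta cycle, so y settles one delta cycle after x.

initial
  #10 a = 1;   // At t = 10, a changes; x and then y follow in successive delta cycles
               // while simulation time remains at 10.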
