SoC Notes

1. What is meant by logic synthesis?

Logic synthesis is the process in computer engineering by which an abstract
specification of desired circuit behavior, typically at the register transfer
level (RTL), is turned into a design implementation in terms of logic gates.
The process is carried out by a computer program called a synthesis tool: given
a design at the register-transfer level, logic synthesis converts it into a
gate-level implementation (a gate-level netlist).
Logic synthesis bridges the gap between high-level synthesis and physical
design automation. It has played a significant role in the evolution of digital
electronics, from manual design and optimization of electronic circuits to
computer-aided design and logic minimization.
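As a rough illustration of the idea (not how a real synthesis tool works), the
following Python sketch turns a small behavioral specification, written as a
Python function standing in for RTL, into a two-level AND/OR/NOT gate
description by enumerating its truth table; the function name and example
behavior are made up for illustration. Real tools add logic minimization,
technology mapping, and timing/area optimization.

# Toy illustration only: derive a sum-of-products gate netlist from a
# behavioral spec by enumerating its truth table.
from itertools import product

def behavior(a, b, c):
    # "RTL" spec: output is 1 when a majority of the inputs are 1
    return (a + b + c) >= 2

def synthesize_sop(func, names):
    """Enumerate the truth table and build a two-level AND/OR/NOT netlist."""
    and_terms = []
    for values in product([0, 1], repeat=len(names)):
        if func(*values):
            literals = [n if v else f"NOT {n}" for n, v in zip(names, values)]
            and_terms.append("AND(" + ", ".join(literals) + ")")
    return "OR(" + ", ".join(and_terms) + ")"

if __name__ == "__main__":
    print(synthesize_sop(behavior, ["a", "b", "c"]))
    # OR(AND(NOT a, b, c), AND(a, NOT b, c), AND(a, b, NOT c), AND(a, b, c))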

 What are the disadvantages of SoPC over Embedded Systems?

A System on a Programmable Chip (SoPC) has several disadvantages compared to a
traditional embedded system:

i. Lower maximum processor performance: SoPC designs may not achieve the same
level of processor performance as dedicated embedded systems.
ii. Higher unit cost in production: the unit cost of an SoPC can be higher,
especially when produced in large quantities.
iii. Higher power consumption: SoPCs can consume more power than traditional
embedded systems.
iv. Time-consuming design process: the design of an SoPC can take six to twelve
months, which can be longer than for a traditional embedded system.
v. Limited visibility: the internal visibility of an SoPC is limited, which can
make debugging and system optimization more challenging.

 What is meant by multi-core SoCs?
Multi-core Systems on a Chip (SoCs) are integrated circuits that contain
multiple Central Processing Unit (CPU) cores. These cores can be a combination
of general-purpose CPUs, Digital Signal Processors (DSPs), and
application-specific cores.
In a typical multi-core SoC, a single physical chip integrates various
components together. This single chip may contain digital, analog, mixed-signal,
and often radio-frequency functions. Each individual core can run at a lower
speed, which reduces overall power consumption as well as heat generation.
The rise of multi-core SoCs is a response to the embedded software industry's
persistent demand for more processing power. They allow the consolidation of
various systems that previously required individual devices or different
systems running on separate devices. However, managing different software
components on a single SoC can be challenging and requires effective software
tools and practices.

 List four applications of reconfigurable devices.

Four key applications of reconfigurable devices:

1. Signal Processing: these devices can be tailored for specific signal
processing tasks, providing both speed and adaptability.
2. Cryptography: reconfigurable devices are used in cryptographic systems,
where reprogrammability offers various benefits over ASIC implementations.
3. Machine Learning and Artificial Intelligence: they are used in machine
learning and AI applications whose tasks require both speed and adaptability.
4. Data Centers and Embedded Devices: companies like Intel and Microsoft use
reconfigurable platforms to build next-generation heterogeneous accelerators
for data centers and embedded devices.

 List the advantages and disadvantages of SoC-based design.
Advantages of SoC-based Design:
1. Compact Size: SoC design integrates multiple components onto a single chip,
resulting in a considerably smaller footprint.
2. Power Efficiency: SoC designs are inherently more power-efficient due to
their small size and improved component interactions.
3. Cost-Effectiveness: the integration of multiple components onto a single
chip can be more cost-effective.
4. Ideal for Mobile Devices: the smaller physical dimensions and power
efficiency of SoCs make them suitable for portable or mobile consumer
electronic devices.
Disadvantages of SoC-based Design:
1. Increased Complexity: The integration of multiple components onto a
single chip can lead to increased complexity.
2. Lack of Modularity: Unlike traditional systems with separate chips for
each component, SoCs lack modularity.
3. Security Challenges: SoCs can pose potential security challenges.

 What is meant by technology mapping?

Technology mapping should not be confused with technology roadmapping, which is
a strategic-planning method used to generate technology roadmaps that support
decision-making in an organization.
In the context of digital system design, technology mapping is a design step
during logic-level synthesis. Given a technology-independent multi-level logic
structure, a cell library, and a set of design constraints, technology mapping
is the process of transforming the multi-level logic structure into a netlist
of library cells that represents the given logic structure and fulfills the
design constraints.
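A minimal sketch of the idea in Python, under assumed costs and a hypothetical
two-cell library (NAND2 and INV): each technology-independent gate is rewritten
into library cells using fixed De Morgan decompositions and the total cell area
is reported. Real technology mappers use pattern or cut matching and optimize
for area, delay, and power against the design constraints.

# Illustrative sketch only: map generic AND/OR/NOT gates onto an assumed
# NAND2 + INV library and report total area.
LIBRARY_AREA = {"NAND2": 1.0, "INV": 0.5}   # assumed per-cell area costs

# De Morgan-based decompositions of generic gates into library cells
DECOMPOSE = {
    "NOT":  ["INV"],
    "AND2": ["NAND2", "INV"],          # AND = NAND followed by an inverter
    "OR2":  ["INV", "INV", "NAND2"],   # OR(a, b) = NAND(NOT a, NOT b)
}

def map_netlist(generic_gates):
    """Map a list of generic gate types to library cells and total area."""
    mapped, area = [], 0.0
    for gate in generic_gates:
        cells = DECOMPOSE[gate]
        mapped.extend(cells)
        area += sum(LIBRARY_AREA[c] for c in cells)
    return mapped, area

if __name__ == "__main__":
    netlist = ["AND2", "OR2", "NOT"]          # e.g. gates of y = NOT((a AND b) OR c)
    cells, total_area = map_netlist(netlist)
    print(cells)        # ['NAND2', 'INV', 'INV', 'INV', 'NAND2', 'INV']
    print(total_area)   # 1.5 + 2.0 + 0.5 = 4.0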

 Difference between Domain-specific and Application-specific processors.

Domain-specific processors and application-specific processors are both types
of embedded processors, but they differ in how narrowly they are specialized:

1. Domain-Specific Processors: these are processors optimized for a whole class
(domain) of applications rather than for a single one, for example digital
signal processors for signal-processing workloads or graphics processors for
graphics workloads. Their instruction sets and datapaths are tuned to the
common operations of that domain, so they achieve better power-performance
efficiency than general-purpose processors while still running a range of tasks
within the domain.

2. Application-Specific Processors: also known as application-specific embedded
processors, these are microprocessors designed to efficiently handle one
application or a limited set of applications. They bridge the gap between
general-purpose processors and Application-Specific Integrated Circuits (ASICs)
by providing improved power-performance efficiency within a familiar software
programming environment. They are used in a wide range of products, including
automotive systems, industrial control systems, and consumer electronics.

 What is meant by BIST in VLSI design?

Built-In Self-Test (BIST) in VLSI design refers to a design-for-testability
technique that places the testing functions physically with the circuit under
test (CUT). BIST is a structural test method that adds logic to an Integrated
Circuit (IC) so that the IC can periodically test its own operation.
There are two major types of BIST:
1. Memory BIST (MBIST): it generates patterns, writes them to the memory, and
reads them back to log any defects. MBIST also provides repair and redundancy
capability. Its main feature is the ability to test memory through a built-in
algorithm.
2. Logic BIST (LBIST): it uses a pseudo-random pattern generator to produce
input patterns that are applied to internal scan chains (a small LFSR sketch is
given at the end of this answer). The results are compacted into a signature,
and a Multiple Input Signature Register (MISR) determines whether the signature
is correct, i.e. whether all tests passed.
BIST is considered one of the most promising solutions for memory testing. It
has advantages such as low cost, at-speed testing, and easy memory access for
testing. However, it requires a very clean design with no unknown (X) states,
as these would corrupt the signature. This means much more stringent design and
test rules, and LBIST insertion is more complex than plain scan. BIST also
incurs noticeable overhead in timing, area, and power.
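To make the LBIST pattern-generation idea concrete, here is a hedged Python
sketch of a 4-bit LFSR used as a pseudo-random test pattern generator; the seed
and tap positions are assumed example values that happen to give a
maximal-length (15-state) sequence. Response compaction into a MISR signature
is sketched later, under the BILBO question.

# Sketch of an LFSR-based test pattern generator (illustrative values only).
def lfsr_patterns(seed=0b1001, taps=(3, 2), width=4, count=15):
    """Yield successive LFSR states used as pseudo-random test patterns."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:                      # XOR of the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

if __name__ == "__main__":
    for pattern in lfsr_patterns():
        print(f"{pattern:04b}")             # 15 distinct non-zero patterns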

Draw the architecture of the boundary scan test.

The architecture of the boundary scan test, also known as JTAG (Joint Test
Action Group), is designed to test interconnects on printed circuit boards
(PCBs) or sub-blocks inside an integrated circuit. Here is a brief overview:
1. Boundary Scan Cells: each pin on the device is connected to a boundary scan
cell, which includes a multiplexer and latches. These cells can selectively
override the functionality of the pin.
2. Boundary Scan Register (BSR): the boundary scan cells are connected together
to form the external boundary scan shift register (BSR).
3. Test Access Port (TAP) Controller: the BSR is combined with a JTAG Test
Access Port (TAP) controller, which comprises four (or sometimes more)
additional pins plus control circuitry.
4. Testing Interconnects: the boundary scan architecture allows testing of
interconnects (including clusters of logic, memories, etc.) without using
physical test probes. Each test cell can be programmed via the JTAG scan chain
to drive a signal onto a pin and across an individual trace on the board.
5. Normal and Test Modes: for normal operation, the added boundary scan latch
cells are set so that they have no effect on the circuit and are therefore
effectively invisible. When the circuit is set into a test mode, the latches
enable a data stream to be shifted from one latch into the next (a toy model of
this shift is given after this list).
6. This architecture allows for efficient testing at the board level and is now
largely synonymous with JTAG.
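A very simplified Python model of shifting data through the boundary scan
register, for intuition only; it ignores the TAP state machine and the
capture/update stages of each cell, and the 4-cell register and bit values are
made-up examples.

# Toy model: test data enters serially on TDI, moves one cell per TCK,
# and previously captured values exit on TDO.
def shift_bsr(bsr, tdi_bits):
    """Shift bits into the boundary scan register; return bits seen on TDO."""
    tdo_bits = []
    for bit in tdi_bits:
        tdo_bits.append(bsr[-1])        # last cell drives TDO
        bsr[:] = [bit] + bsr[:-1]       # every cell takes its neighbour's value
    return tdo_bits

if __name__ == "__main__":
    bsr = [0, 0, 0, 0]                  # 4 boundary scan cells, previously captured
    print(shift_bsr(bsr, [1, 0, 1, 1])) # captured values shifted out: [0, 0, 0, 0]
    print(bsr)                          # new drive values now loaded: [1, 1, 0, 1]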

 What are the goals and objectives of routing?

The goals and objectives of routing in a network are as follows:

Goals of Routing:

1. Efficiency: The primary goal of routing is to deliver packets from the source to the
destination in the most efficient manner possible.
2. Scalability: As networks grow, the routing process should be able to handle the
increased traffic and more complex network topologies.
3. Reliability: Routing should ensure that data transmission continues even in the event
of network failures by finding alternate paths.

Objectives of Routing Protocols:

1. Optimal Routing: To ensure the most efficient path is chosen for data packet
transmission.
2. Stability: To maintain consistent and reliable network performance.
3. Ease of Use: To facilitate straightforward network management and operation.
4. Flexibility: To adapt to changes in network topology and traffic patterns.
5. Rapid Convergence: To quickly update routing information in response to network
changes.

These goals and objectives guide the process of routing, ensuring efficient and reliable data
transmission across networks.

 List the different types of routing algorithms and explain any one
algorithm in detail.
Routing algorithms can be broadly classified into three categories:
1. Adaptive Algorithms: these algorithms change their routing decisions
whenever the network topology or traffic load changes. They are also known as
dynamic routing algorithms.
2. Non-Adaptive Algorithms: these algorithms do not change their routing
decisions once they have been selected. This is also known as static routing.
3. Hybrid Algorithms: these algorithms are a combination of adaptive and
non-adaptive algorithms.
Let's look at one of these in detail, the adaptive algorithm:
Adaptive algorithms, also known as dynamic routing algorithms, make routing
decisions based on the current network topology and traffic load. They use
dynamic information such as current topology, load, and delay to select routes.
The optimization parameters are distance, number of hops, and estimated transit
time.
Adaptive algorithms can be further classified into three types:
• Isolated: each node makes its routing decisions using only the information it
has, without seeking information from other nodes.
• Centralized: a centralized node has complete information about the network
and makes all the routing decisions.
• Distributed: each node receives information from its neighbors and then
decides how to route the packets.
For example, in the distributed adaptive algorithm, each node receives
information from its neighbors and then decides how to route packets (a sketch
of this update is shown after this section). A disadvantage is that a packet
may be delayed if the topology changes between the intervals at which a node
receives information and sends packets. It is also known as a decentralized
algorithm, as it computes the least-cost path between source and destination.
This type of algorithm is beneficial in large networks where the network
topology can change frequently.
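The sketch below, in Python, illustrates the distributed adaptive idea with an
assumed four-node example topology: each node repeatedly refines its distance
table using only the costs advertised by its neighbours (a distance-vector,
Bellman-Ford style update).

# Sketch of a distributed adaptive (distance-vector) routing update.
INF = float("inf")

# Assumed example network: neighbour -> link cost, per node
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 7},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 7, "C": 1},
}

def distance_vector(topology):
    # each node initially knows only itself and its direct links
    dist = {u: {v: (0 if u == v else topology[u].get(v, INF)) for v in topology}
            for u in topology}
    changed = True
    while changed:                                   # iterate until convergence
        changed = False
        for u in topology:
            for neighbour, link_cost in topology[u].items():
                for dest in topology:
                    candidate = link_cost + dist[neighbour][dest]
                    if candidate < dist[u][dest]:    # better path via neighbour
                        dist[u][dest] = candidate
                        changed = True
    return dist

if __name__ == "__main__":
    print(distance_vector(topology)["A"])   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}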

 Explain in detail the AMBA bus architecture: AHB, APB, and AXI.

AMBA Bus Architecture

The Advanced Microcontroller Bus Architecture (AMBA) is an open standard
developed by ARM that defines how to connect and manage the different
components or blocks within a System-on-Chip (SoC).
1. Advanced High-performance Bus (AHB): AHB is designed for high-performance,
high-frequency components, including processors, on-chip memories, and memory
interfaces, among others. AHB supports multiple masters and multiple slaves and
has a wider bandwidth. It was added to accommodate high-performance designs
with features such as split transactions, single-cycle bus master handover,
single-clock-edge operation, and wider data bus configurations.
2. Advanced Peripheral Bus (APB): APB is designed for low-bandwidth peripherals
that do not require the high performance of AHB. It is a simple, non-pipelined
protocol used to communicate (read or write) from a bridge/master to a number
of slaves over a shared bus. Typical peripherals include a UART, low-frequency
GPIO, and timers (a behavioral sketch of the APB handshake follows this
answer).
3. Advanced eXtensible Interface (AXI): AXI offers even higher performance than
AHB, implemented through a point-to-point connection scheme. Instead of a
system bus, the AXI interconnect allows transactions between masters and slaves
using only a few well-defined interfaces. It is suitable for high-speed
sub-micrometer interconnect and supports features such as separate
address/control and data phases, and unaligned data transfers using byte
strobes.
These buses are part of the AMBA specification and are widely used in SoC
designs because they are well documented and royalty-free.
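A behavioral Python sketch (not RTL and not the full protocol) of the APB
handshake referenced in item 2, to make its simple, non-pipelined nature
concrete: each transfer consists of a SETUP phase (PSEL asserted, PENABLE low)
followed by an ACCESS phase (PENABLE high). The signal names follow the APB
specification, but the register-file slave and the omission of PREADY wait
states are simplifying assumptions.

# Behavioral sketch of APB-style transfers against a toy register-file slave.
class ApbSlave:
    def __init__(self):
        self.regs = {}                       # addr -> data

    def access(self, psel, penable, pwrite, paddr, pwdata=None):
        if not (psel and penable):           # only respond in the ACCESS phase
            return None
        if pwrite:
            self.regs[paddr] = pwdata
            return None
        return self.regs.get(paddr, 0)       # PRDATA

def apb_write(slave, addr, data):
    slave.access(psel=1, penable=0, pwrite=1, paddr=addr, pwdata=data)  # SETUP
    slave.access(psel=1, penable=1, pwrite=1, paddr=addr, pwdata=data)  # ACCESS

def apb_read(slave, addr):
    slave.access(psel=1, penable=0, pwrite=0, paddr=addr)               # SETUP
    return slave.access(psel=1, penable=1, pwrite=0, paddr=addr)        # ACCESS

if __name__ == "__main__":
    uart = ApbSlave()                        # hypothetical peripheral
    apb_write(uart, addr=0x04, data=0xAB)
    print(hex(apb_read(uart, addr=0x04)))    # 0xab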

 What is ATPG? List a few algorithms for ATPG.

Automatic Test Pattern Generation (ATPG) is an electronic design automation
method used to find an input or test sequence that, when applied to a digital
circuit, enables automatic test equipment to distinguish between the correct
circuit behavior and the faulty circuit behavior caused by defects. The
generated patterns are used to test semiconductor devices after manufacture, or
to help determine the cause of failure.
Here are a few algorithms used in ATPG:
1. Roth's D-Algorithm (D-ALG): defined the calculus and algorithms for ATPG
using D-cubes.
2. Goel's PODEM: used path propagation constraints to limit the ATPG search
space and introduced backtrace.
3. Fujiwara's FAN: efficiently constrained the backtrace to speed up the search
and further limited the search space.

Write the difference between faults and fault coverage.

Faults vs Fault Coverage

 Faults: in the context of digital systems, a fault refers to a defect or
malfunction in a component of the system. It can be due to design errors,
manufacturing defects, or environmental conditions. Faults can cause the system
to deviate from its expected behavior.
 Fault Coverage: fault coverage is a measure of the effectiveness of a test in
detecting faults in a system. It is usually expressed as a percentage and
calculated as the ratio of the number of faults detected by a test to the total
number of faults in the system; a worked example is given below. A higher fault
coverage indicates a more effective test.
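The worked example below uses assumed numbers for illustration:

Fault coverage (%) = (number of faults detected by the test set / total number
of modeled faults) × 100

For instance, if a pattern set detects 9,400 of 10,000 modeled stuck-at faults,
the fault coverage is 9,400 / 10,000 × 100 = 94%.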


Explain, with the necessary diagram, the hardware units required for video streaming.

A simplified diagram of the hardware units required for video streaming:

+-------------------+     +-------------------+     +-------------------+
|                   |     |                   |     |                   |
|   Video Source    | --> |   Capture Card    | --> |     Computer      |
| (Camera, Console) |     | (Elgato 4K60 Pro) |     |  (CPU, GPU, RAM)  |
|                   |     |                   |     |                   |
+-------------------+     +-------------------+     +-------------------+

1. Video Source: this could be a local screen capture, another PC or gaming
console, a webcam, or an HDMI camera.
2. Capture Card: this device converts the raw signal from the video source and
prepares it for processing by a computer. Note that a capture card is not
needed when streaming from a webcam or a smartphone. An example of a capture
card is the Elgato 4K60 Pro Mk.2 HDMI capture card.
3. Computer: the computer is where the video stream is encoded and broadcast. A
fairly powerful CPU, or a video card with NVENC support, is the key piece for
encoding the video stream. A sample configuration for live-stream beginners
could be a 4th-generation Intel Core i5 (or its AMD equivalent), 8 GB of RAM,
and a DirectX 10.1-compatible GPU.

You also need an internet connection with good upload bandwidth to ensure the
data reaches your viewers smoothly.

What is the need for IP-based design? Explain with a block diagram the IP-based
SoC design for mobile phones.

Need for IP-based Design

IP-based design has become increasingly important for several reasons:

1. Quick Product Development: as consumer demands require rapid product
development, IP-based design allows for faster design cycles.
2. Complex Ecosystem: the ecosystem has grown complex to accommodate internal
and external IP, and the introduction of SoC architectures has made IP design
even more significant.
3. Risk Management: IP is critical to the overall SoC design, which is
continually evolving. A structured IP design approach is necessary to manage
the risk associated with SoC design.
4. Design Reuse: IP-based design allows for design reuse, which can result in
reduced engineering effort, faster time to market, and lower development costs.
5. Microprocessor Design: in the microprocessor world, where a chip consists
entirely of one or more CPUs and cache, designing the processor as reusable IP
makes more sense than starting from scratch each time.

IP-based SoC Design for Mobile Phones

In an IP-based SoC design for mobile phones, various IP cores are integrated to form a
complete system. These IP cores can include components such as a CPU, GPU, memory
controller, peripheral interfaces (like USB, UART), and wireless communication modules
(like Wi-Fi, Bluetooth).

Here’s a simplified block diagram of an IP-based SoC design for mobile phones:

+--------------------------------------------------------------+
|                                                              |
|  +---------+  +---------+  +------------+  +-----------+     |
|  |   CPU   |  |   GPU   |  |   Memory   |  |   Wi-Fi   |     |
|  |   IP    |  |   IP    |  | Controller |  |    IP     |     |
|  |         |  |         |  |     IP     |  |           |     |
|  +---------+  +---------+  +------------+  +-----------+     |
|                                                              |
|  +-----------+  +---------+  +---------+  +-----------+      |
|  | Bluetooth |  |   USB   |  |  UART   |  |   Other   |      |
|  |    IP     |  |   IP    |  |   IP    |  |    IPs    |      |
|  +-----------+  +---------+  +---------+  +-----------+      |
|                                                              |
+--------------------------------------------------------------+
Each IP core performs a specific function and communicates with the other IP
cores through a system interconnect. This modular approach allows for
flexibility in design, as IP cores can be added, removed, or replaced based on
the requirements of the mobile phone.

Why are SoCs performance-oriented designs?

System on Chip (SoC) designs are performance-oriented for several reasons:

1. Integration: SoCs integrate all components of a computer or other electronic
system into a single chip, including processing, memory, communication, and
input/output (I/O) functions. This high degree of integration leads to
efficient design and improved performance.
2. Compact Size: by integrating all system components onto a single chip, SoCs
significantly reduce the size of electronic devices.
3. Low Power Consumption: SoCs can operate at lower power levels due to their
high degree of integration and efficient design.
4. Cost-effectiveness: SoCs can be more cost-effective than traditional ICs due
to their high degree of integration and the use of standard-cell-based design
techniques.
5. Flexibility: SoCs can be customized to meet the specific requirements of
different electronic systems, making them highly flexible and adaptable.
6. Reliability: SoCs offer improved reliability due to their high level of
integration and the use of advanced testing and validation techniques during
the design process.
7. Advanced Features: SoCs enable the integration of advanced features such as
artificial intelligence (AI) and machine learning capabilities, enabling the
development of intelligent electronic systems.
8. Reuse of Design Elements: SoCs allow for the reuse of design elements and
Intellectual Property (IP) blocks, which helps reduce design costs and
time-to-market.
9. Design Tools and Methodologies: specialized design tools and methodologies
have been developed specifically for the design and manufacture of SoCs, which
help streamline the design process and improve design efficiency.

In summary, the performance-oriented design of SoCs is driven by the need to
integrate multiple functions into a single chip, leading to improvements in
size, power consumption, cost, flexibility, reliability, and the ability to
incorporate advanced features.

Compare FPGA and ASIC design.

FPGA vs ASIC Design

Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated
Circuits (ASICs) are both types of integrated circuits, but they differ in
several ways:

1. Functionality and Flexibility:
o FPGA: a reprogrammable chip with a collection of logic gates that can be
programmed by the user to capture the desired logic. This makes FPGAs highly
flexible.
o ASIC: customized for a specific application's needs and not reprogrammable,
which makes ASICs far less flexible.
2. Performance and Efficiency:
o FPGA: performance is limited by the programmable fabric.
o ASIC: provides improved speed and efficiency; ASICs are notably more
efficient than FPGAs.
3. Volume Production and Cost:
o FPGA: cheaper for small volumes, as there is no need to pay for fabrication;
however, FPGAs become expensive at large volumes.
o ASIC: suited for bulk production and tends to be more cost-effective in large
quantities, but has high initial development costs.
4. Design Complexity:
o FPGA: the FPGA design flow is simpler and faster.
o ASIC: the ASIC design flow is much more complex and design-intensive.

In summary, the choice between an FPGA and an ASIC depends on the specific
requirements of the application, including functionality, performance, volume
of production, cost, and design complexity.

What do you mean by hard IP and soft IP?

Hard IP and Soft IP are terms used in the context of FPGA and SoC design:

 Hard IP: circuitry that is hard-wired and etched into silicon to perform a
specific function. Hard IPs are efficient and fast, and include components like
processors, DSP blocks, and high-speed transceivers. They are offered as layout
designs in a layout format such as GDS, mapped to a specific process
technology, and can be dropped directly by the consumer into the final layout
of the chip. They cannot be customized for different process technologies.
 Soft IP: blocks built from the generic logic fabric (LUTs, logic blocks, etc.)
in an FPGA. Soft IPs are generally offered as synthesizable RTL models,
developed in a hardware description language such as SystemVerilog or VHDL.
Sometimes IP cores are also synthesized and provided as a generic gate-level
netlist, which can then be mapped to any process technology. The advantage of
soft IP cores is that they can be customized in the back-end placement and
routing flow by the consumer to map to any process technology.

In summary, the main difference between hard IP and soft IP lies in flexibility
and customization. Hard IPs are specific, efficient, and fast but cannot be
customized; soft IPs offer flexibility and customization but may not be as
efficient as hard IPs.

What is a reconfigurable processor?

A reconfigurable processor is a type of processor that combines the speed of
Application-Specific Integrated Circuits (ASICs) with the universality of
classical digital processors. It is designed to exploit some form of run-time
(dynamically) configurable hardware to provide adaptive instruction set
modification in order to meet application requirements.

Reconfigurable processors offer a more adaptive option: reconfiguring the most
efficient hardware resources to process the data as needed. This key feature
allows computations to be performed in hardware to increase performance, while
retaining much of the flexibility of a software solution.

The principal difference compared with ordinary microprocessors is the ability
to add custom computational blocks using Field-Programmable Gate Arrays
(FPGAs). The main difference from custom hardware, i.e. ASICs, is the
possibility of adapting the hardware at runtime by "loading" a new circuit onto
the reconfigurable fabric. This provides new computational blocks without the
need to manufacture and add new chips to the existing system.

In summary, reconfigurable processors are intended to fill the gap between
hardware and software, achieving potentially much higher performance than
software while maintaining a higher level of flexibility than fixed hardware.

What is meant by multicore SoCs?

A multicore System-on-Chip (SoC) is a system that integrates multiple CPU cores
and possibly several Application-Specific Integrated Circuits (ASICs) on a
single physical chip. Each individual core can run at a lower speed, which
reduces overall power consumption as well as heat generation.

These SoCs can house entire systems by accommodating multiple processor cores
on a single piece of hardware. The multiple cores may differ from one another
(heterogeneous), allowing the consolidation of various systems that previously
required individual devices or different systems running on separate devices.

In complex SoCs, such as those used in networking or automotive applications,
primary (boot) cores, application/system cores, and networking cores are
integrated on a single die to handle data from different peripherals.

The true promise of this technology lies in the fact that the cores may differ
from one another (heterogeneous). This has brought the concept of
mixed-safety-criticality to the forefront, with a safe domain and a non-safe
domain running on a single SoC.

In terms of performance, multicore SoCs perform better for several embedded
applications. They have advanced in terms of peripherals, interfaces,
processing prowess, various on-chip resources, and a simplified way to manage
data flow and communication among resources.

List the factors considered for SoC design.

The factors considered for System-on-Chip (SoC) design include:

1. Target Application: the specific use case or application for which the SoC
is being designed.
2. Power Efficiency: the SoC should consume as little power as possible while
delivering the required performance.
3. Performance: the SoC should meet the performance requirements of the target
application.
4. Connectivity: the SoC should have the necessary interfaces to connect with
other components in the system.
5. System Architecture: the overall structure and interconnection of the SoC
components.
6. Logic Design: the design of the digital logic circuits within the SoC.
7. Verification: ensuring that the SoC design meets the specified requirements.
8. Physical Design: the layout of the SoC, including the placement of
components and routing of interconnections.
9. Testing: checking the SoC for defects and ensuring it functions as expected.
10. Reliability: the SoC should be reliable and function correctly over its
expected lifetime.
11. Functional Safety (FuSa): the SoC should have mechanisms to handle failures
and prevent them from causing unacceptable harm.
12. Quality: the SoC should meet defined quality standards.
13. Security: the SoC should have features to protect against security threats.

Define controllability and observability.

Controllability and observability are fundamental concepts in control systems:

1. Controllability: the ability to drive the state of the system to any desired
state by applying a suitable input over a finite time period. Controllability
can be checked using the Kalman test. The controllability matrix is

Q_c = [B, AB, A^2 B, ..., A^(n-1) B]

The system is controllable if Q_c has full rank n (for a single-input system
Q_c is square, so equivalently its determinant is non-zero).

2. Observability: the ability to determine the internal state of the system
from its input and output signals over a finite interval of time. Observability
can also be checked using the Kalman test. The observability matrix is

Q_o = [C^T, A^T C^T, ..., (A^T)^(n-1) C^T]

The system is observable if Q_o has full rank n (for a single-output system,
equivalently its determinant is non-zero).
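A numeric sketch of both Kalman tests in Python (numpy), using an assumed
two-state example system; matrix_rank checks whether each matrix has full
rank n.

# Kalman tests on an assumed example system.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Q_c = [B, AB, ..., A^(n-1) B]
Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
# Q_o built row-wise as [C; CA; ...; CA^(n-1)], the transpose of the form above
Qo = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print("controllable:", np.linalg.matrix_rank(Qc) == n)   # True
print("observable:  ", np.linalg.matrix_rank(Qo) == n)   # True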

Compare testing and verification.

Testing vs Verification

 Testing: the process of executing an application or program with the intent of
detecting potential software bugs. It involves running the software and
applying various testing methodologies to ensure that it functions as intended.
Testing can be done manually by programmers or automated using tools and
scripts. The goal of testing is to ensure that the software does not fail to
meet the expectations of the end users.
 Verification: the process of checking that the software achieves its goal
without bugs. It includes activities that do not require executing the
software, such as reviewing design documents and requirement specifications and
performing code walkthroughs. Verification ensures that the code logic of the
software application is in line with the specification. The objective of
verification is to answer the question "Are we developing the software
according to the specification and requirements?"

In summary, while testing is a dynamic process that involves executing the
software to find defects, verification is a static process of inspecting
documents rather than the actual software itself. Verification is usually
performed before testing.

Explain in brief about ATPG.

Automatic Test Pattern Generation (ATPG) is an electronic design automation
method used to find an input or test sequence that, when applied to a digital
circuit, enables automatic test equipment to distinguish between the correct
circuit behavior and the faulty circuit behavior caused by defects.

The ATPG process for a targeted fault consists of two phases:

1. Fault activation: this establishes a signal value at the fault model site
that is the opposite of the value produced by the fault model.
2. Fault propagation: this moves the resulting signal value, or fault effect,
forward by sensitizing a path from the fault site to a primary output.

ATPG can fail to find a test for a particular fault in at least two cases:

1. The fault may be intrinsically undetectable, such that no pattern exists
that can detect that particular fault.
2. A detection pattern exists, but the algorithm cannot find one.

The effectiveness of ATPG is measured by the number of modeled defects, or
fault models, that are detectable and by the number of generated patterns.
These metrics generally indicate test quality (higher with more fault
detections) and test application time (higher with more patterns).
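As a hedged illustration in Python, the sketch below runs an exhaustive search
for a test pattern on a toy circuit, y = (a AND b) OR c, with a single stuck-at
fault injected on an internal net; the circuit and net name are made up. Real
ATPG algorithms such as D-ALG, PODEM, and FAN avoid exhaustive search, but the
goal is the same: set the fault site to the opposite value and propagate the
difference to a primary output.

# Toy ATPG: find a pattern that distinguishes the good and faulty circuits.
from itertools import product

def circuit(a, b, c, stuck=None):
    """Evaluate y = (a AND b) OR c, optionally forcing internal net n1 = a AND b."""
    n1 = a & b
    if stuck is not None:            # inject a stuck-at fault on n1
        n1 = stuck
    return n1 | c

def generate_test(stuck_value):
    for a, b, c in product([0, 1], repeat=3):
        good = circuit(a, b, c)
        faulty = circuit(a, b, c, stuck=stuck_value)
        if good != faulty:           # pattern activates and propagates the fault
            return (a, b, c)
    return None                      # fault is undetectable by any pattern

if __name__ == "__main__":
    print(generate_test(stuck_value=0))   # test for n1 stuck-at-0 -> (1, 1, 0)
    print(generate_test(stuck_value=1))   # test for n1 stuck-at-1 -> (0, 0, 0)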

Draw the hardware units required in an SoC design for smart mobile phones and
explain the steps.

The System on Chip (SoC) design for smart mobile phones includes several key
components:

1. Central Processing Unit (CPU): the "brains" of the SoC; it runs most of the
code of the operating system and most of your apps.
2. Graphics Processing Unit (GPU): handles graphics-related tasks, such as
visualizing an app's user interface and 2D/3D gaming.
3. Memory: used in the SoC for storage; it may be volatile (RAM) or
non-volatile (ROM).
4. Image Signal Processor (ISP): converts data from the phone's camera into
image and video files.
5. Digital Signal Processor (DSP): handles more mathematically intensive
functions than a CPU.
6. Neural Processing Unit (NPU): used in high-end smartphones to accelerate
machine learning (AI) tasks.
7. Video encoder/decoder: handles the power-efficient conversion of video files
and formats.
8. Modems: convert wireless signals into data the phone understands.
9. Peripheral interfaces: externally connected devices/interfaces such as USB,
HDMI, Wi-Fi, and Bluetooth.

The SoC design process encompasses several crucial stages:

1. Specification: the required features, performance, power, and cost targets
are established.
2. Architecture Design: architects create a high-level design outlining the CPU
core arrangement, memory hierarchy, buses, and interfaces. Choices made here
significantly impact performance, power efficiency, and overall functionality.
3. Later stages typically include RTL design and IP integration, functional
verification, physical design (synthesis, placement, and routing), and
fabrication followed by post-silicon testing.

Write a note on SoC verification.

System-on-Chip (SoC) Verification

SoC verification is a crucial process in the design and development of a
System-on-Chip (SoC), a complete system integrated into a single chip. This
process tests the system design and functionality, ensuring that specifications
and features are correctly implemented. On average, the verification phase
consumes more than 70% of the SoC development cycle.

The SoC verification flow involves several steps:

1. Feature Extraction: this step involves understanding the design at the top
level. Features like power management, signal processing, and connectivity are
identified and need to be verified individually and in conjunction with each
other.
2. SoC-Level Verification Plan: it is important to distinguish between
functionalities that need to be verified at the SoC level and at the IP level.
This distinction helps ensure that each component functions correctly both
independently and as part of the larger system.
3. Verify Interconnections: at the SoC level, the focus is on verifying the
intercommunication between the sub-blocks. This ensures that the different
components of the SoC can effectively communicate and work together.
4. Keep Placeholders for Updates: sometimes not all features of the SoC are
defined in the initial phase; for such features, the verification plan is
updated at a later stage.

Given the complexity of modern SoCs, a standard and proven verification flow is
essential, involving extensive verification at every level, from block to IP to
sub-system to SoC. This process is vital to ensure the functional correctness
of an SoC design.

Write a note on partial scan techniques for testability.

Partial Scan Techniques for Testability

Partial scan is a Design for Testability (DFT) technique that scans only a
subset of the flip-flops (FFs) of a sequential circuit. This method attempts to
provide a trade-off between the testability of the circuit and the overheads
(area and delay) introduced by scan design.

The selection of which FFs to scan differs among the various partial-scan
techniques. Some methods based on testability analysis use controllability,
observability, and sequential depth as measures of circuit testability;
flip-flops with poor controllability measures are selected for scan (a small
selection sketch follows this note).

Partial scan has several advantages:

 It has lower overheads (area and delay) compared to full scan.
 It allows for a reduced test length.
 It permits limited violations of scan design rules; for example, a flip-flop
on a critical path may be left unscanned.

However, while partial scan can improve testability and reduce overheads, it
may not be as thorough as full scan. The choice between partial and full scan
therefore depends on the specific requirements and constraints of the design
project.
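The sketch referenced above is a small Python illustration of one selection
heuristic: given assumed per-flip-flop controllability costs (SCOAP-like
numbers where larger means harder to control), scan only the k
hardest-to-control flip-flops. The flip-flop names and scores are hypothetical.

# Select the k flip-flops with the worst (largest) controllability cost.
def select_scan_ffs(controllability, k):
    """Return the k flip-flops with the worst (largest) controllability cost."""
    ranked = sorted(controllability, key=controllability.get, reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    # hypothetical controllability costs per flip-flop
    scores = {"ff_state0": 3, "ff_counter": 18, "ff_fsm_deep": 42, "ff_ctrl": 27}
    print(select_scan_ffs(scores, k=2))   # ['ff_fsm_deep', 'ff_ctrl']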

Write a note on the BILBO-based BIST architecture.

BILBO-Based BIST Architecture

The Built-In Logic Block Observer (BILBO) is a widely used Built-In Self-Test
(BIST) architecture. It is an embedded, offline BIST architecture that reuses
existing flip-flops from the Circuit Under Test (CUT) to construct the Test
Pattern Generator (TPG) and Output Response Analyzer (ORA) functions.

The BILBO architecture partitions the CUT into groups of flip-flops and groups
of combinational logic. Each group of flip-flops is augmented with additional
logic to provide multiple modes of operation. When a BILBO functions as a TPG,
it provides pseudo-random test patterns by operating as a Linear Feedback Shift
Register (LFSR). When it functions as an ORA, it performs multiple-input
signature analysis by operating as a Multiple Input Signature Register (MISR);
a small MISR sketch follows this note.

BILBO is a test-per-clock BIST approach, since a new test pattern is applied to
the CUT and a new output response is compacted during each clock cycle of the
BIST sequence. The additional gates added to each existing flip-flop to create
the BILBO are one exclusive-OR gate, one AND gate, and one NOR gate. For each
BILBO register, one multiplexer, one inverter, and one or more exclusive-OR
gates (needed to construct a primitive characteristic polynomial) are added.

Two control signals select the BILBO's mode of operation. During BIST
operation, the MISR mode is used for output data compaction. To provide a TPG
during BIST, the Z inputs to the BILBO are held at a constant logic 0, which
forces the MISR to operate as an LFSR generating a maximum-length sequence of
pseudo-random test patterns.

In summary, the BILBO-based BIST architecture is a versatile and efficient
method for testing digital circuits, providing a balance between test coverage
and hardware overhead.
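The following Python sketch models only the output-compaction side mentioned
above: a 4-bit MISR that XORs each cycle's response word into a shifting
register so that a long response stream collapses into one signature. The
width, taps, and response values are assumed examples, not taken from any
particular BILBO design.

# Sketch of MISR-style signature compaction (illustrative parameters only).
def misr_signature(responses, width=4, taps=(3, 2), seed=0):
    """Compact a sequence of output words into a single signature."""
    sig = seed
    mask = (1 << width) - 1
    for word in responses:
        feedback = 0
        for t in taps:                       # XOR of tapped signature bits
            feedback ^= (sig >> t) & 1
        sig = (((sig << 1) | feedback) ^ word) & mask   # shift, then fold in word
    return sig

if __name__ == "__main__":
    good = [0b1010, 0b0111, 0b0001, 0b1100]
    bad  = [0b1010, 0b0101, 0b0001, 0b1100]   # one faulty response word
    print(f"{misr_signature(good):04b}")      # golden signature
    print(f"{misr_signature(bad):04b}")       # differs -> fault detected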

Write a note on crosstalk glitch analysis.

Crosstalk Glitch Analysis

Crosstalk glitch analysis is a computational method used to diagnose and
quantify the crosstalk induced on victim nets by aggressor nets. Crosstalk can
trigger both functionality errors, due to glitch injection, and delay errors,
due to signal timing deviation.

Crosstalk creates voltage spikes (glitches) that can be significant enough to
cause a downstream register to latch an incorrect logic state. These glitches
are extremely elusive, and high accuracy in both glitch modeling and failure
checking is vital to weeding out all potential violations.

Crosstalk Glitch

A crosstalk glitch, or crosstalk noise, is a sudden rising or falling bump or
spike on the victim net. It occurs when one net is switching while a
neighboring net is held at constant logic, and the two nets have mutual
(coupling) capacitance between them.

For instance, if the aggressor net switches from logic 0 to logic 1 while the
victim net is at constant 0, a rising spike (rising glitch) is created on the
victim net. Conversely, if the aggressor net switches from logic 1 to logic 0
while the victim net is at constant logic 1, a falling spike (falling glitch)
is created on the victim net.

Glitch Analysis

Accurate glitch analysis involves calculating the glitch waveform, modeling the
glitch-filtering characteristics of each logic gate, and gauging how an input
glitch is propagated to the output of the logic gate. In many cases, crosstalk
glitches are eliminated by the inherent low-pass filtering nature of CMOS logic
gates.

The analysis needs to calculate precisely how glitches propagate and combine
with downstream crosstalk coupling in order to determine whether the combined
glitch can propagate and disturb a register.

In conclusion, crosstalk glitch analysis is crucial for predicting crosstalk in
complicated, high-density electronic circuits and high-speed communication
systems. It helps identify potential problems and mitigate them to ensure the
proper functioning of the system.
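A common first-order estimate of the peak glitch (assuming the victim driver's
holding resistance is ignored, so this is an upper bound) treats the coupling
and ground capacitances as a capacitive divider:

V_glitch ≈ (C_c / (C_c + C_g)) × ΔV_aggressor

where C_c is the coupling capacitance to the aggressor, C_g is the victim's
capacitance to ground, and ΔV_aggressor is the aggressor's voltage swing. With
assumed values C_c = 10 fF, C_g = 40 fF, and a 1.0 V swing, the peak glitch is
roughly 0.2 V; signal-integrity tools then check such glitches against noise
margins and the low-pass filtering of the receiving gates.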
