
2018 IEEE 36th VLSI Test Symposium (VTS)

IR Drop Prediction of ECO-Revised Circuits Using Machine Learning

Shih-Yao Lin1, Yen-Chun Fang1, Yu-Ching Li1, Yu-Cheng Liu1, Tsung-Shan Yang1, Shang-Chien Lin1, Chien-Mo Li1, Eric Jia-Wei Fang2
1 Graduate Institute of Electronics Engineering, National Taiwan University, Taipei 106, Taiwan
2 MediaTek Inc., Hsinchu 300, Taiwan

Abstract — Excessive power supply noise (PSN), such as IR drop, can cause timing violations in VLSI chips. However, PSN simulation takes a very long time, especially when multiple iterations are needed in IR drop signoff. In this work, we propose a machine learning technique to build an IR drop prediction model based on circuits before ECO (engineering change order) revision. After revision, we can re-use this model to predict the IR drop of the revised circuit. Because the previous circuit(s) and the revised circuit are very similar, the model can be applied with small error. We propose seven feature extractions, which are simple and scalable for large designs. Our experimental results show that the prediction accuracy (average error 3.7mV) and correlation (0.55) are very high for a three-million-gate real design. The run time speedup is up to 30X. The proposed method is very useful for designers to save simulation time when fixing IR drop problems.

Keywords — power supply noise, IR drop analyzer, machine learning

I. INTRODUCTION

Power supply noise (PSN) has become an important concern for VLSI system design and test [1, 2]. Excessive PSN degrades circuit performance and can even lead to timing failure [3, 4]. It is a well-known problem that excessive PSN can induce significant yield loss (overkill) [5, 6, 7]. PSN includes IR drop and Ldi/dt noise. Since IR drop is more significant than Ldi/dt noise for on-chip power integrity analysis, this paper focuses on the IR drop effect only.

A traditional dynamic IR drop analyzer solves large linear equation systems to obtain the IR drop of every node in the circuit, and then simulates critical paths to verify whether there is any IR drop violation [8]. However, this process is very slow, especially when multiple iterations are needed in IR drop signoff. For an industry-scale design (~3M gate count), IR drop analysis can take up to one day. Every time a minor revision is made, the whole process has to be repeated, even if the revised circuit changed only a small number of cells.

It has been shown that machine learning prediction of circuit speedpaths [9] and timing signoff [10] is feasible. Recently, Ye et al. [11] developed an SVM-based regression method to predict circuit delay at runtime without PSN consideration. However, it has been shown that IR drop analysis is inaccurate if PSN is ignored [12]. Unfortunately, realistic large circuits are difficult for machine learning since the dimension is very large. Power-aware dynamic IR drop prediction of cells can be found in [13]. They used a linear model to predict the IR drop of cells. However, the prediction rule is based on designers' experience, which cannot be generalized and automated. So far, there is still no good machine learning technique available to predict PSN for large circuits.

Fig. 1 shows the traditional flow of IR drop analysis. After each circuit revision, we need to rerun the IR drop analyzer to make sure there is no violation. The source of patterns can be either functional patterns or test patterns. Because a real design process needs many revisions, repeated IR drop analysis during each iteration can be very time consuming.

Fig. 1. Traditional IR drop analysis flow

In this work, we propose to use machine learning to build an IR drop prediction model for the circuit(s) before revision. After a circuit revision, we can re-use this model to predict the IR drop of the revised circuit. After the predicted IR drop meets our specification, we rerun the dynamic IR drop analyzer to make sure there is indeed no violation before the final signoff. This work has three major contributions. First, we take advantage of the similarity between the original circuit and the revised circuit to learn a model that speeds up the signoff process, so that very few dynamic IR drop analyses are needed. This new

978-1-5386-3774-6/18/$31.00 ©2018 IEEE



flow saves a lot of iterative simulation time during revision. Second, we propose to sample a small portion of cells to predict the IR drop of all cells. This greatly reduces the size of the input data, so that machine learning of a realistic industrial design is feasible. Third, we propose seven simple but important feature extraction methods to greatly reduce the dimension, so the proposal is scalable for large designs. Our experimental results on a three-million-gate GPU show that the average error of the predicted IR drop is 3.7mV and the correlation is 0.55. The run time speedup is up to 30X compared to a commercial tool, Ansys RedHawk. The proposed method is very useful for designers to save simulation time during ECO to fix IR drop problems.

The rest of this paper is organized as follows. Section II reviews previous research in PSN-aware IR analysis. Section III presents the proposed machine learning technique. Section IV shows experimental results on benchmark circuits. Finally, Section V concludes this paper.

II. PAST RESEARCH

A. Statistical IR drop Prediction

Many different metrics have been proposed as alternatives to IR drop, such as weighted switching activity (WSA) [14, 15], switching cycle average power (SCAP) [16], and flip-flop toggle count (FFTC) [17]. Although some metrics show good correlations with actual IR drop values, there is no known model to translate the proposed metrics into actual IR drop values. It is not clear what the pass/fail threshold is for these metrics. Therefore, it is impossible to use these alternative metrics to sign off a design. A recent paper used a linear model to predict IR drop values [13]. For each cell, they calculated a linear model to predict the IR drop based on the power consumption. The problem with a linear model is that it may not be good enough for complex designs. In addition, it is computationally expensive to calibrate a linear model for each cell in large designs. Another paper tried to identify high-power areas (hot-spots) using switching probability and logic level [18]. Although there is a correlation between real hot-spots and the predicted areas, it is still not clear what the pass/fail threshold is for design sign-off.

B. Machine Learning IR drop Prediction

Machine learning has been applied to identify speedpath outliers [9]. Various feature extractions have been performed based on topology, dynamic effects, static effects, statistical effects, and random effects. Nevertheless, that work did not consider IR drop effects. Support vector machines have been applied to predict IR drop [11]. This technique was implemented on an FPGA to dynamically adjust the CPU operating frequency. Their technique used only input patterns, with no feature extraction, to predict IR drop. The number of dimensions is very large, so it is not scalable for large designs. Another previous work is IR-drop-aware timing prediction using machine learning [19]. This work proposed feature extraction so it is scalable for large designs. However, it did not consider the ECO revision issue: every time a new revision is made, a new model is needed.

C. Dynamic IR drop Analyzer

Our proposed machine learning technique can be applied to speed up any circuit IR drop analyzer. In this paper, we use a PSN-aware dynamic IR drop analyzer, IDEA (IR drop-aware Efficient timing Analyzer), as our benchmark simulator [12]. This technique is very scalable because it models the voltage-delay characteristic with a simple analytical function, which requires only limited simulation of library cells. Experimental results showed that, for small circuits, the error is less than 5% compared with HSPICE. Although IDEA is up to 272 times faster than a commercial tool, NANOSIM, it still takes days to simulate million-gate designs.

III. PROPOSED TECHNIQUE

A. Proposed Flow

Fig. 2 shows the proposed flow of our work. During the design phase, we have several ECO-revised circuits, including previous versions (…, En-2, En-1) and the current version (En). Suppose that we have performed dynamic IR drop analysis on the previous versions, using a dynamic IR drop analyzer such as Ansys RedHawk [8]. We can then extract important features from a small number of sampled cells. After that, we run machine learning to build a model for this circuit, so we can re-use this model to predict the IR drop of the current version (En). Designers can use our prediction results to quickly evaluate whether the IR drop of the current version meets the specification or not. Our machine learning prediction can save a lot of IR drop analysis runtime during iterations. Finally, when the predicted IR drop values all meet our specification, we need to run the dynamic IR drop analyzer again to make sure there is indeed no violation before the final signoff. Comparing Fig. 2 with Fig. 1, we save simulation time during the prediction phase.

Fig. 2. Proposed IR Drop Prediction flow
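The prediction flow above can be sketched end to end. The cell records, the spec threshold, and the 1-nearest-neighbor stand-in model below are toy illustrations for exposition only; the paper's actual flow uses the ANN of Section III.C and a commercial dynamic IR drop analyzer in place of these helpers.

```python
# Toy sketch of the proposed flow (Fig. 2): train on an already-analyzed
# edition E1, re-use the model on the ECO-revised edition E2, and flag
# cells whose predicted IR drop exceeds a (hypothetical) 50mV spec.

def extract_features(cell):
    # toy features: (power, toggle rate); the paper uses T+7 dimensions
    return (cell["power"], cell["toggle"])

def fit_1nn(cells):
    # "training" = remember (features, IR drop) of sampled cells
    return [(extract_features(c), c["ir_mv"]) for c in cells]

def predict_1nn(model, cell):
    f = extract_features(cell)
    # predict using the most similar previously analyzed cell
    return min(model, key=lambda m: sum((a - b) ** 2 for a, b in zip(m[0], f)))[1]

# E1: previous edition, already analyzed by a dynamic IR drop analyzer
e1 = [{"power": 1.0, "toggle": 0.2, "ir_mv": 20.0},
      {"power": 3.0, "toggle": 0.9, "ir_mv": 85.0}]
# E2: ECO-revised edition, very similar to E1, not yet analyzed
e2 = [{"power": 1.1, "toggle": 0.25},
      {"power": 2.9, "toggle": 0.85}]

model = fit_1nn(e1)                              # train once on E1
pred = [predict_1nn(model, c) for c in e2]       # re-use the model on E2
needs_fix = [p for p in pred if p > 50.0]        # compare against the spec
print(pred, len(needs_fix))                      # -> [20.0, 85.0] 1
```

Only when `needs_fix` is empty would the (expensive) dynamic IR drop analyzer be re-run for the final signoff, which is where the iteration-time savings come from.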


B. Cell Sampling and Feature Extraction

Because there are many cells in a real design, it is impractical to use all cells to build a machine learning model. In this research, we propose to sample a portion of the cells to build a model. Two factors should be considered when we take samples: (1) physical location and (2) IR drop values. For (1), we divide the chip layout into M x N windows. Based on our experience, we randomly take 5%~10% of the cells from each window. For (2), we sort all cells by their IR drops and take samples from three categories of cells: serious IR drop, medium IR drop, and low IR drop.

Table I shows the features we consider in this work. Given a sampled cell, there are three categories of features. Power features of a sampled cell include the power of the cell, the toggle rate of the cell, and the type of the cell. Physical features include the cell location (i.e., X, Y coordinates), the toggle rate of neighbor cells, and the neighbor count (the number of cells in the neighborhood). Finally, the via feature is the distance to via. Each feature is explained as follows.

TABLE I. SEVEN FEATURES OF A SAMPLED CELL

Category          | 1               | 2                    | 3
Power features    | Cell power      | Cell toggle rate     | Cell type
Physical features | Cell location   | Neighbor toggle rate | Neighbor count
Via feature       | Distance to via |                      |

Cell power is the power consumption of the sampled cell given a set of input patterns. Cell toggle rate measures the switching activity of the sampled cell. The toggle rate is defined as the number of toggles over the number of clock cycles, and it is a number between 0% and 200%. The reason a 200% toggle rate is possible is that clock buffers toggle twice in each cycle. Both cell power and cell toggle rate are scalar variables that can be obtained from a dynamic IR drop analyzer, such as RedHawk. Cell type is the logic gate type of the sampled cell, such as NAND, NOR, etc. This is a categorical scalar variable, which can be obtained from the netlist or the IR drop analysis report.

Fig. 3 shows how to define neighbors for a given sampled cell. We draw a rectangular window centered at the given sampled cell. The window height and width can be adjusted by the user; different technologies may need different settings. In this work, we gradually enlarged the window size and observed the prediction accuracy under different window widths and heights. After several experiments, our ANN model reached the highest prediction accuracy when the window height is set to three row heights and the window width is 50.

Fig. 3. Neighbors of a sampled cell

Neighbor toggle rate (NTR) is the toggle rate among all neighbor cells. Because different cell types have different impacts on IR drop, we need to count NTR separately per cell type. For each cell type, the toggle rates of neighbor cells of that type are summed up. This feature is a vector whose dimension equals the number of cell types. The NTR of a sampled cell s is shown in equation (1):

NTR_s = ( Σ_{c∈W} tr_{c,1} , … , Σ_{c∈W} tr_{c,T} )   (1)

where tr_{c,t} is the toggle rate of cell c if c is of type t (and 0 otherwise), W is the neighbor window of the sampled cell s, and T is the number of cell types.

Neighbor count (NC) is the total count of neighbor cells. This feature is a scalar, defined in equation (2), where I_c is an indicator variable (1 means the presence of cell c in the window, and 0 otherwise):

NC_s = Σ_{c∈W} I_c   (2)

Distance to via (D) is the distance to the closest power via, and this via must be in the same row as the sampled cell. Fig. 4 shows the definition of D. The sampled cell is in the middle of the window and the red rectangles represent power vias. The number D represents the resistance from the sampled cell to the power network.

Fig. 4. Distance to via

In total, we propose seven features of dimension (T+7), where T is the total number of cell types used in the design. This is a very small dimension, so the proposal is scalable for large designs.

C. Machine Learning Prediction Model

An artificial neural network (ANN) [20] imitates the neural structure of the human brain. Fig. 5 shows an example ANN model with one hidden layer, where x_n is the input feature vector of the nth sampled cell, t_n is the target IR drop value of the nth sampled cell, w denotes the weights of the neurons in the ANN, and N is the number of training data.
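To make the feature definitions of Section III.B concrete, the sketch below computes the neighbor toggle rate (NTR), neighbor count (NC), and distance to via (D) for one sampled cell. The dictionary-based cell and via records, the window half-sizes, and the three-type library are hypothetical stand-ins for a real design database.

```python
# Toy feature extraction for one sampled cell (Table I):
# NTR per eq. (1), NC per eq. (2), and D (same-row closest power via).

T_TYPES = ["NAND", "NOR", "INV"]              # T = 3 cell types in this toy

def neighbor_features(sample, cells, half_w=25.0, half_h=1.5):
    sx, sy = sample["x"], sample["y"]
    # neighbors = other cells inside the window centered at the sampled cell
    win = [c for c in cells if c is not sample
           and abs(c["x"] - sx) <= half_w and abs(c["y"] - sy) <= half_h]
    # NTR (eq. 1): per-type sums of neighbor toggle rates -> T-dim vector
    ntr = [sum(c["toggle"] for c in win if c["type"] == t) for t in T_TYPES]
    nc = len(win)                             # NC (eq. 2): neighbor count
    return ntr, nc

def distance_to_via(sample, vias):
    # D: closest power via in the same row as the sampled cell
    same_row = [v for v in vias if v["row"] == sample["row"]]
    return min(abs(v["x"] - sample["x"]) for v in same_row)

cells = [
    {"x": 0.0,  "y": 0.0, "row": 0, "type": "NAND", "toggle": 0.5},  # sampled
    {"x": 10.0, "y": 0.0, "row": 0, "type": "NOR",  "toggle": 0.3},
    {"x": 20.0, "y": 1.0, "row": 1, "type": "NAND", "toggle": 0.7},
    {"x": 99.0, "y": 9.0, "row": 9, "type": "INV",  "toggle": 1.2},  # outside
]
vias = [{"x": -4.0, "row": 0}, {"x": 30.0, "row": 0}, {"x": 1.0, "row": 1}]

ntr, nc = neighbor_features(cells[0], cells)
d = distance_to_via(cells[0], vias)
# the full feature vector has dimension T+7: power, toggle rate, type,
# X, Y, NTR (T values), NC, and D
print(ntr, nc, d)   # -> [0.7, 0.3, 0] 2 4.0
```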

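As one way to realize the regression model of Section III.C, the sketch below trains a two-hidden-layer, twenty-neuron-per-layer network (the configuration the authors report in Section V) on synthetic feature vectors, using scikit-learn's MLPRegressor as a stand-in for the FANN library used in the paper. The data here are synthetic, not from the benchmark circuits.

```python
# Sketch of the ANN regression: fit per-cell feature vectors to IR drop
# targets by minimizing the squared-error objective of eq. (3).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 9))           # toy (T+7)-dim feature vectors
y = 30.0 * X[:, 0] + 10.0 * X[:, 1]      # toy IR drop targets (mV)

# two hidden layers, twenty neurons each, as reported in Section V
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
ann.fit(X, y)                            # minimizes sum_n ||y(x_n, w) - t_n||^2
pred = ann.predict(X)
print(round(float(np.mean(np.abs(pred - y))), 2), "mV mean abs error")
```

In the paper's flow, `X` and `y` would come from the 10% sampled cells of a previously analyzed edition, and `predict` would be run on all cells of the revised edition.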

Fig. 5. ANN model (with one hidden layer)

Our goal is to find a function y(x, w) that minimizes the following error function:

E(w) = Σ_{n=1}^{N} ‖ y(x_n, w) − t_n ‖²   (3)

IV. EXPERIMENTAL RESULTS

Three ITC'99/IWLS'05 benchmark circuits (b18, b19, leon3mp) in 45nm technology and one real GPU (Graphics Processing Unit) in 16nm technology from industry have been evaluated with our proposed method. Profiles of these four circuits are shown in Table II. The first column shows the number of cells. Given a commercial ATPG test pattern set, the three ITC/IWLS benchmark circuits were simulated by our own dynamic IR drop simulator, IDEA [12]. The second column shows the number of patterns simulated. The average IR drop and the max IR drop of each circuit are also shown. Dynamic IR drop analysis of the GPU was performed by a commercial tool, Ansys RedHawk. All the machine learning experiments use the open-source artificial neural network library FANN [21]. The experiments were run on an Intel Xeon CPU E5520 @ 2.27GHz with 32GB RAM.

TABLE II. PROFILE OF BENCHMARK CIRCUITS (E1, BEFORE ECO)

Circuit  | Cells  | Patterns | VDD (V) | Avg. IR drop (mV) | Max IR drop (mV)
b18      | 64K    | 50       | 1.1     | 29                | 39
b19      | 128K   | 50       | 1.1     | 59                | 83
leon3mp  | 638K   | 50       | 1.1     | 92                | 241
GPU      | 3,006K | 240      | 0.9     | 25                | 190

A. IR Drop Prediction before ECO

We first evaluate the effectiveness of the IR drop prediction for the circuit before the engineering change order (ECO). Prediction accuracy is measured by the normalized root mean square error (NRMSE), defined in equations (4) and (5). In these equations, y_i is the simulated IR drop of the ith sampled cell, ŷ_i is the predicted IR drop of the ith sampled cell, and N is the number of data points.

RMSE = sqrt( (1/N) · Σ_{i=1}^{N} (y_i − ŷ_i)² )   (4)

NRMSE = (RMSE / max_i y_i) × 100%   (5)

First, we want to know how many samples we need to build a model with high prediction accuracy. In this experiment, we sampled a small portion of cells and predicted the IR drop of all cells. The training data and prediction data are from the same design (E1); there is no ECO revision in this experiment. The prediction accuracy for the three benchmark circuits is plotted in Fig. 6. We observe that NRMSE drops quickly as the number of sampled cells increases, and remains constant when the percentage of sampled cells exceeds 10%. These experiments show that 10% sampling is enough for our designs. Please note that IR drop is highly design-dependent: each design has a unique model, even if the designs use the same technology.

Fig. 6. Prediction accuracy vs. number of samples (before ECO)

Table III displays the prediction results of the four benchmark circuits. The machine learning model is trained with data from 10% sampled cells of the first edition E1. Both NRMSE and CC are very good. As shown in the table, machine learning can predict IR drop accurately, without any ECO revision, compared to simulation results. To evaluate the ANN technique, we also tried the extra trees technique [22]. Results for the three benchmark circuits are very similar to those of the ANN.

TABLE III. EXPERIMENT RESULTS (TRAINING=PREDICTION=E1)

Circuit  | Feature Dimension | NRMSE (ANN, Tree) | CC (ANN, Tree)
b18      | 42                | 8.7%, 6.8%        | 0.94, 0.95
b19      | 44                | 6.6%, 6.1%        | 0.94, 0.94
leon3mp  | 55                | 3.3%, 4.4%        | 0.98, 0.98
GPU      | 1,201             | 6.7%, NA          | 0.78, NA

The correlation coefficient (CC) is defined in equation (6). A smaller NRMSE and a bigger CC indicate better results.

CC = Σ_{i=1}^{N} (y_i − ȳ)(ŷ_i − ŷ̄) / sqrt( Σ_{i=1}^{N} (y_i − ȳ)² · Σ_{i=1}^{N} (ŷ_i − ŷ̄)² )   (6)

B. IR Drop Prediction after ECO

We evaluate the effectiveness of the IR drop prediction for circuits after ECO. First, we use the original circuit as edition E1. Then, we use Cadence SOC Encounter to move 13 and 7 serious IR drop cells in benchmark circuits b18 and b19, respectively, to produce a new edition E2. For benchmark circuit leon3mp, we add one power stripe to produce edition E2. Then we move 32 cells to produce


edition E3. Three editions of the GPU are real data from MediaTek. Table IV shows the prediction accuracy of the four circuits after ECO. The machine learning model is trained with data from 10% sampled cells of the first edition (E1), and then we use the model to predict the second (E2) and third (E3) edition circuits.

TABLE IV. PREDICTION RESULTS OF FOUR CIRCUITS AFTER ECO

         | E1            | E2            | E3
Circuit  | NRMSE | CC    | NRMSE | CC    | NRMSE | CC
b18      | 8.7%  | 0.94  | 11.2% | 0.88  | -     | -
b19      | 6.6%  | 0.94  | 9.7%  | 0.93  | -     | -
leon3mp  | 3.3%  | 0.98  | 6.1%  | 0.98  | 7.7%  | 0.98
GPU      | 6.7%  | 0.78  | 9.0%  | 0.59  | 11.2% | 0.61

We can see from Table IV that our machine learning model has the best prediction accuracy when predicting the first edition circuit, E1. As the number of revisions increases, prediction accuracy becomes worse. Therefore, it is important to train the model using the most recent revision. Table V and Table VI use data from both previous editions (E1 and E2) to improve the prediction accuracy of the third edition (E3). Table V shows the prediction results for a randomly sampled 10% of the cells in E3. Table VI shows the prediction results for the top 10% serious IR drop cells in E3. With both E1 and E2 data in the training set, the prediction accuracy is much better than when using E1 data only (Table IV). Average error is defined in equation (7), and max error is defined in equation (8):

Average Error = (1/N) · Σ_{i=1}^{N} |y_i − ŷ_i|   (7)

Max Error = max_{i=1,…,N} |y_i − ŷ_i|   (8)

where y_i and ŷ_i are the simulated and predicted IR drop of the ith sampled cell, respectively. A positive per-cell error (y_i − ŷ_i > 0) means under-prediction and a negative error means over-prediction. The average error of leon3mp is 5mV, which is 5% of the average IR drop value. The average error of the GPU is 3.7mV, which is 15% of the average IR drop value. The max error is 40mV, which is about 20% of the worst-case IR drop.

TABLE V. PREDICTION RESULTS OF E3 CIRCUIT (TRAINED BY E1+E2)

Circuit  | NRMSE | CC   | Avg. Error
leon3mp  | 3.4%  | 0.98 | 3.8mV
GPU      | 6.8%  | 0.81 | 3.3mV

TABLE VI. PREDICTION OF TOP 10% SERIOUS IR DROP CELLS OF E3

Circuit  | NRMSE | CC   | Avg. IR drop (mV) | Avg. Error (mV) | Max IR drop (mV) | Max Error (mV)
leon3mp  | 3.7%  | 0.54 | 92                | 5.0 (5%)        | 241              | 49.0 (20%)
GPU      | 7.4%  | 0.55 | 25                | 3.7 (15%)       | 190              | 39.3 (21%)

Fig. 7 shows the error distribution of leon3mp and the GPU (top 10% worst cells in Table VI), in total 60K and 300K cells for leon3mp and the GPU, respectively. 99.9% of the errors are smaller than 15% of the max IR drop (36mV for leon3mp and 28.5mV for the GPU). The red lines mark the 15% boundary. Only 10 cells (out of 60K) in leon3mp and 22 cells (out of 300K) in the GPU are under-predicted.

Fig. 7a. Leon3mp error distribution (60K cells)
Fig. 7b. GPU error distribution (300K cells)

Fig. 8 plots the simulated IR drop versus the predicted IR drop for leon3mp and b18. Training data are E1 plus E2 and prediction data are E3. The Y axis represents simulated IR drop and the X axis represents predicted IR drop. The correlation between simulated and predicted IR drop is 0.98 for leon3mp and 0.88 for b18.

Fig. 8(a). Leon3mp    Fig. 8(b). b18

Fig. 9 shows the IR drop map of the leon3mp E3 circuit. Fig. 9a is the simulated IR drop map and Fig. 9b is the predicted IR drop map. The green area is low IR drop, the yellow area is medium IR drop, and the orange area is serious IR drop. Red dots are high IR drop cells. The correlation between the simulated IR drop map and the predicted IR drop map is high.

Fig. 9a. Leon3mp simulated IR drop map
Fig. 9b. Leon3mp predicted IR drop map
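The accuracy metrics of this section can be written out as a short script. The data are toy values, and two details are assumptions where the source equations leave room for interpretation: RMSE is normalized by the maximum simulated IR drop in eq. (5), and the average error of eq. (7) is taken over absolute per-cell errors.

```python
# Toy computation of the Section IV metrics: RMSE/NRMSE (eqs. 4-5),
# correlation coefficient CC (eq. 6), and average/max error (eqs. 7-8).
import math

sim  = [20.0, 40.0, 60.0, 80.0]   # simulated IR drop y_i (mV)
pred = [22.0, 38.0, 63.0, 77.0]   # predicted IR drop (mV)
n = len(sim)

rmse = math.sqrt(sum((s - p) ** 2 for s, p in zip(sim, pred)) / n)   # eq. (4)
nrmse = rmse / max(sim) * 100.0    # eq. (5), assuming max-IR-drop normalization

mean_s, mean_p = sum(sim) / n, sum(pred) / n
cc = (sum((s - mean_s) * (p - mean_p) for s, p in zip(sim, pred))
      / math.sqrt(sum((s - mean_s) ** 2 for s in sim)
                  * sum((p - mean_p) ** 2 for p in pred)))            # eq. (6)

avg_err = sum(abs(s - p) for s, p in zip(sim, pred)) / n              # eq. (7)
max_err = max(abs(s - p) for s, p in zip(sim, pred))                  # eq. (8)
# sign convention per cell: y_i - yhat_i > 0 is under-prediction
print(round(rmse, 2), round(nrmse, 1), round(cc, 3), avg_err, max_err)
# -> 2.55 3.2 0.994 2.5 3.0
```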


C. Runtime

Table VII shows the runtime comparison between the proposed technique and commercial tools. In the proposed flow, we need only one feature extraction plus one training run in the training phase. In the prediction phase, we need one feature extraction plus one (or more) prediction. The total time of the proposed technique is therefore two feature extractions plus one training plus one prediction. Although we cannot save time for small circuits, we can save a significant amount of simulation time for large circuits. We need only 13 minutes to predict the IR drop of the GPU, whereas RedHawk needs almost one day to simulate the circuit. The run time speedup is shown in parentheses (including feature extraction and training). Our technique significantly reduces the IR drop simulation time.

TABLE VII. RUNTIME COMPARISON

Circuit            | b18        | b19         | leon3mp     | GPU
Feature Extraction | 12s        | 32s         | 147s        | 11m27s
Training           | 51s        | 106s        | 204s        | 24m57s
Prediction         | 1s         | 2s          | 19s         | 1m29s
Total time         | 76s        | 172s        | 517s        | 49m20s
NANOSIM            | 46s (0.6X) | 95s (0.55X) | 734s (1.4X) | -
RedHawk            | -          | -           | -           | 1 day (30X)

V. DISCUSSION

For an ANN to work well, both the number of hidden layers and the number of neurons should be carefully tuned. Using too few neurons results in underfitting, which occurs when there are too few neurons to detect important information in a large data set. Too many neurons may lead to overfitting. In this work, we tried two, three, and four hidden layers. We found that two hidden layers with twenty neurons each are enough for our data set. More layers would not improve the accuracy, and more neurons would lead to overfitting.

Our proposal is suited to the design sign-off stage, when the revised circuit is very similar to its previous version. Every time we add a new version, we need to add this new version to our training data so that this assumption remains valid.

VI. CONCLUSIONS

In this work, we have proposed an IR drop prediction method for ECO-revised circuits using an artificial neural network. We sample a small portion of cells on a die to train the neural network. We propose seven feature extractions, which are simple and scalable for large designs. Our experimental results show that prediction accuracy (average error 3.7mV) and correlation (0.55) are very high for a 3-million-gate real design. The run time speedup is up to 30X. The proposed method is very useful for designers to save simulation time when fixing IR drop problems.

REFERENCES

[1] K. L. Shepard and V. Narayanan, "Noise in deep submicron digital design," Proc. International Conference on Computer-Aided Design, 1997, pp. 524-531.
[2] M. Tehranipoor and K. M. Butler, "Power supply noise: a survey on effects and research," IEEE Design & Test of Computers, vol. 27, no. 2, 2010, pp. 51-67.
[3] H. H. Chen and D. D. Ling, "Power supply noise analysis methodology for deep-submicron VLSI chip design," Proc. Design Automation Conference, 1997, pp. 638-648.
[4] Y.-M. Jiang and K.-T. Cheng, "Analysis of performance impact caused by power supply noise in deep submicron devices," Proc. Design Automation Conference, 1999, pp. 760-765.
[5] L.-C. Wang, D. M. H. Walker, A. Majhi, B. Kruseman, G. Gronthoud, L. E. Villagra, P. van de Wiel, and S. Eichenberger, "Power supply noise in delay testing," Proc. International Test Conference, 2006, pp. 1-10.
[6] Y.-H. Li et al., "Capture-power-safe test pattern determination for at-speed scan-based testing," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, vol. 33, no. 1, 2014, pp. 127-138.
[7] P. Girard, C.-W. Wu, and X. Wen, Power-Aware Testing and Test Strategies for Low Power Devices, Springer, 2010.
[8] Apache RedHawk User Manual, 2015.
[9] P. Bastani, K. Killpack, L.-C. Wang, and E. Chiprout, "Speedpath prediction based on learning from a small set of examples," Proc. Design Automation Conference, 2008, pp. 217-222.
[10] A. B. Kahng, M. Luo, and S. Nath, "SI for free: machine learning of interconnect coupling delay and transition effects," Proc. ACM/IEEE International Workshop on System Level Interconnect Prediction (SLIP), 2015, pp. 1-8.
[11] F. Ye, F. Firouzi, Y. Yang, K. Chakrabarty, and M. B. Tahoori, "On-chip voltage-droop prediction using support-vector machines," Proc. VLSI Test Symposium, 2014.
[12] C.-Y. Han, Y.-C. Li, H.-T. Kan, and J. C.-M. Li, "Power-supply-noise-aware test pattern analysis and regeneration for yield improvement," IEICE Trans. Fundamentals, vol. E99-A, no. 12, Dec. 2016.
[13] Y. Yamato, "A fast and accurate per-cell dynamic IR drop estimation method for at-speed scan test pattern validation," Proc. International Test Conference, 2012.
[14] J. Lee et al., "Layout-aware, IR-drop tolerant transition fault pattern generation," Proc. Design, Automation and Test in Europe, 2008, pp. 1172-1177.
[15] J. Ma, J. Lee, and M. Tehranipoor, "Layout-aware pattern generation for maximizing supply noise effects on critical paths," Proc. VLSI Test Symposium, 2009, pp. 221-226.
[16] N. Ahmed and M. Tehranipoor, "Transition delay fault test pattern generation considering supply voltage noise in a SOC design," Proc. Design Automation Conference, 2007, pp. 533-538.
[17] X. Wen et al., "Low-capture-power test generation for scan-based at-speed testing," Proc. International Test Conference, 2005, pp. -1028.
[18] K. Miyase et al., "Identification of high power consuming areas with gate type and logic level information," Proc. IEEE European Test Symposium (ETS), 2015, pp. 1-6.
[19] Y.-C. Liu, C.-Y. Han, S.-Y. Lin, and J. C.-M. Li, "PSN-aware circuit test timing prediction using machine learning," IET Computers & Digital Techniques, 2016.
[20] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," The Bulletin of Mathematical Biophysics, vol. 5, no. 4, 1943, pp. 115-133.
[21] Fast Artificial Neural Network (FANN) library. Available: http://libfann.github.io/fann/docs/files/fann-h.html
[22] P. Geurts et al., "Extremely randomized trees," Machine Learning, Springer, 2006.

