MMM: Machine Learning-Based Macro-Modeling for Linear Analog ICs and ADC/DACs

Abstract—Performance modeling is a key bottleneck for analog design automation. Although machine learning-based models have advanced the state-of-the-art, they have so far suffered from huge data preparation cost, very limited reusability, and inadequate accuracy for large circuits. We introduce ML-based macro-modeling techniques to mitigate these problems for linear analog ICs and ADC/DACs. The modeling techniques are based on macro-models, which can be assembled to evaluate circuit system performance and, more appealingly, can be reused across different circuit topologies. On representative testcases, our method achieves more than 1700× speedup for data preparation and remarkably smaller model errors compared to recent ML approaches. It also attains 3600× acceleration over SPICE simulation with very small errors, and reduces the data preparation time for an ADC design from 40 days to 9.6 h.

Index Terms—Electronic design automation, machine learning, macro modeling, performance modeling.

Manuscript received 20 September 2023; revised 3 January 2024 and 11 April 2024; accepted 11 April 2024. Date of publication 19 June 2024; date of current version 22 November 2024. This work was supported in part by NSF under Grant CCF-2106725 and Grant CCF-2212346, and in part by SRC under Grant GRC-CADT-3013.001. This article was recommended by Associate Editor G. G. E. Gielen. (Corresponding author: Yishuang Lin.)

Yishuang Lin, Yaguang Li, and Jiang Hu are with the Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843 USA (e-mail: [email protected]; [email protected]; [email protected]).

Meghna Madhusudan, Sachin S. Sapatnekar, and Ramesh Harjani are with the Department of Electrical and Computer Engineering, University of Minnesota Twin Cities, Minneapolis, MN 55455 USA (e-mail: [email protected]; [email protected]; [email protected]).

Digital Object Identifier 10.1109/TCAD.2024.3416894

I. INTRODUCTION

THE LACK of performance models that are simultaneously fast and accurate is a primary reason why analog/mixed-signal (AMS) design automation has not achieved as much success as its digital counterpart. The use of SPICE simulations for performance estimation in circuit optimization is very limited due to their expensive computational cost. To overcome this bottleneck, there has been extensive research on fast circuit modeling techniques. Symbolic analysis [1] and model order reduction (MOR) [2] are two early approaches, which mathematically derive equations of circuit transfer functions or performance. In [3], symbolic analysis is integrated with graph representations. However, the number of symbolic elements grows exponentially with the size of the circuit, which restricts symbolic analysis to small circuits. Similarly, the effectiveness of MOR is mostly restricted to linear circuits, and it is very difficult to apply MOR to ADC/DACs.

An alternative to simulation-based approaches uses data-fitted/trained models, e.g., the early work on polynomial models [4] and support vector machine (SVM) models [5]. Recently, the development of neural network technology has significantly advanced the progress of methods in this category. Applications of artificial neural networks (ANNs) for AMS circuit modeling include [6], [7], [8]; convolutional neural networks (CNNs) are used for analog placement solution assessment in [9]; and graph convolutional network (GCN) methods have been employed for estimating performance in reinforcement learning-based analog transistor sizing [10]. A customized graph neural network (GNN) technique is developed for analog circuit performance classification in [11]. In [12], GNN is applied to analog circuit DC voltage prediction. In [13], neural network-based transistor models are constructed and integrated with symbolic analysis. Overall, machine learning-based models have become a widespread trend due to their promising results on fast performance estimation time and generally good accuracy.

However, challenges also exist for machine learning-based models. The preparation of ML training data relies on circuit simulations, which makes it computationally expensive. Even assuming 20 min for simulating a large circuit (actual simulation times could be larger), it takes half a month of simulation time to obtain 1000 data samples. The expensive data preparation is exacerbated by the diverse performance metrics of different analog circuits [e.g., the performance metrics of an operational transconductance amplifier (OTA) include gain, bandwidth (BW), and phase margin (PM), while a low-dropout regulator (LDO) is evaluated by dropout voltage and power supply rejection ratio (PSRR)], unlike digital circuits, where the metrics are uniform (power/performance/area). This implies poor model reusability across analog circuits: a machine learning model trained on an OTA is difficult to apply to an LDO. In addition, it is noticed in [7] that the accuracy of ANN models drops remarkably when circuit size or performance range increases.

A few prior efforts attempt to resolve the issue of high data preparation cost. The work in [6] shows model transferability from schematic to layout for the same circuit, while the model for one type of circuit does not apply to a different circuit type. In [10], knowledge transfer is restricted among different process technology nodes of the same circuit design. The work in [9] explores transfer learning for CNN models.

1937-4151 © 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.
Authorized licensed use limited to: UNIVERSITY OF SOUTHAMPTON. Downloaded on December 24,2024 at 12:47:08 UTC from IEEE Xplore. Restrictions apply.
LIN et al.: MMM: MACHINE LEARNING-BASED MACRO-MODELING FOR LINEAR ANALOG ICs AND ADC/DACs 4741
However, it requires much more training data for CNN models than for GNN models [11]; moreover, the knowledge transfer in [11] is restricted to two different topologies of the same type of circuit. In the work on circuit connectivity inspired neural network (CCI-NN) [7], some circuit knowledge is pre-embedded into ANN models to reduce training sample generation cost. In addition to the efforts attempting to reduce data preparation cost, multi-task learning (MTL) is applied in [14] to reduce ML model training cost, where multiple tasks are solved jointly.

We propose a new approach of sub-circuit level ML-based macro-modeling (MMM), with the objectives of largely reducing ML performance model construction cost via model reuse while improving model accuracy at the same time. Although the transistor-level ML models in [13] can also be reused, they are too fine-grained, and the consequently frequent calls to such models substantially slow down the estimation of circuit-level performance. A CCI-NN [7] is composed of a set of sub-NNs, which appear to be similar to our macro-models, but there is a critical difference: CCI-NN must train the entire NN for a whole circuit and cannot train sub-NNs individually. In contrast, each macro-model in MMM is independently trained. This difference leads to two significant consequences. First, the output of a sub-NN in [7] is generally not associated with any physical meaning, and therefore a sub-NN is very difficult, if not impossible, to reuse in a different type of circuit. Second, CCI-NN is restricted to neural network models, while our MMM supports almost any ML model, including random forest (RF) and XGBoost. As a result, the data preparation cost of CCI-NN is over 300× higher than that of MMM, and MMM achieves smaller errors than CCI-NN with 2.9× shorter circuit performance estimation time. The experiments include multi-stage amplifiers and ADC/DACs, which are constituted of multiple sub-circuits. These representative circuits are widely used in circuit modeling works [7], [13]. Case studies for the performance metrics unity gain frequency (UGF) and spurious free dynamic range (SFDR) are also compared.

The contributions of this work include the following.
1) We propose techniques for building sub-circuit level macro-models that can be reused in performance estimation of linear analog ICs and ADC/DACs, which are two types of common AMS circuits. In addition, we also consider variable loading effects between sub-circuits in the MMMs.
2) The effectiveness of MMM is validated on multiple linear analog ICs and ADC/DACs, including circuits with feedback and a circuit with over 20K devices.
3) Comparisons are made with recent ML approaches of ANN [6], CCI-NN [7], MTL [14], GNN [11], and NN-based symbolic analysis [13], as well as MOR [15], to show the following advantages of MMM.
   a) Training Data Preparation Time: MMM achieves data preparation speedups of 1788×, 1791×, 357×, 1787×, and 885× versus ANN, GNN, CCI-NN, MTL, and NN-based symbolic analysis, respectively. For an 8-bit flash ADC design, MMM reduces the data preparation time from 40 days to 9.6 h.
   b) Accuracy of Circuit Performance Estimation: MMM achieves less than 1% modeling error compared to SPICE, significantly lower than the 6% of ANN, 6% of CCI-NN, 4% of MTL, 4% of MOR, 4% of GNN, and 9% of NN-based symbolic analysis.
   c) Runtime Cost of Circuit Performance Estimation: MMM is 3639×, 1873×, 42×, 2.9×, and 61× faster than SPICE, NN-based symbolic analysis, MOR, CCI-NN, and MTL, respectively.
   d) ML Model Training Cost: MMM obtains training time reductions of 2462×, 4520×, 5469×, and 18255× compared to ANN, CCI-NN, GNN, and NN-based symbolic analysis, respectively.

To the best of our knowledge, this is the first study on ML-based macro-modeling for linear analog ICs and ADC/DACs with exploration of deep learning models.

II. RELATED WORKS

Circuit performance modeling research has a long history [4], [13], [16], [17], [18], [19], [20]. In [13], transistor drain-source current and small-signal parameters are estimated by neural networks, whose input features are transistor sizes and terminal voltages. This method cannot scale to large circuits, as the models are not reusable and require separate training for different circuits. A nonlinear regression model is applied in [16], which can scale from the circuit space to the design space. However, the equation models cannot capture higher-order details of the circuit characteristics, and thus the fitting error is not satisfactory. A variety of modeling methods have also been applied to circuit performance prediction, such as symbolic analysis [13], [17], polynomial models [18], Gaussian processes [19], and Bayesian models [20].

Recently, with the rapid development of machine learning techniques, a variety of ML-based circuit performance modeling techniques have been explored. SVM is employed in [5] and [21] for circuit performance prediction in terms of classification and regression problems. SVM-based approaches are also applied with interface modeling and assume-guarantee reasoning for compositional and hierarchical analysis and design in [22], [23], and [24] for larger AMS systems, including nonlinear systems and RF systems. In [5], a projection algorithm is used to assess the quality of the approximation model of the same circuit at different levels of hierarchy using the same underlying model, which is preferred in top-down design flows for system-level and component-level performance metrics. The work in [7] makes use of ANN to model circuit performance. In [6], transfer learning is applied to transfer knowledge to different technologies or to post-layout design. Although the above works take advantage of machine learning techniques, their models are used for flat circuit performance evaluation. Thus, a separate model is required for different circuits.

Some works [25], [26], [27], [28], [29] apply hierarchical approaches, which decompose large circuits into small ones and analyze them individually. The work of [26] uses a graph method to decompose large circuits into a tree structure and
4742 IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 43, NO. 12, DECEMBER 2024
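To make the assembly step from the Introduction concrete: for linear circuits, each macro-model predicts the transfer function coefficients of its sub-circuit, and a circuit-level response follows by composing the sub-circuit transfer functions. The sketch below cascades two hypothetical one-pole stages and recovers system-level gain and UGF. The coefficient values are illustrative stand-ins for ML-predicted ones (not data from this work), and loading effects are assumed to be already folded into the coefficients.

```python
import math

def h_eval(num, den, s):
    """Evaluate H(s) = num(s)/den(s); coefficient lists are lowest order first."""
    n = sum(c * s ** i for i, c in enumerate(num))
    d = sum(c * s ** i for i, c in enumerate(den))
    return n / d

def system_gain_db(stages, freq_hz):
    """Assemble macro-models: for a cascade of linear stages, the system
    transfer function is the product of the per-stage transfer functions."""
    s = 2j * math.pi * freq_hz
    h = complex(1.0)
    for num, den in stages:
        h *= h_eval(num, den, s)
    return 20 * math.log10(abs(h))

def find_ugf(stages, lo=1.0, hi=1e9, iters=100):
    """Unity-gain frequency by bisection in log-frequency, assuming the
    cascade gain falls monotonically through 0 dB."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if system_gain_db(stages, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Hypothetical coefficients standing in for ML-predicted ones:
# stage 1 has DC gain 100 and a pole at 1 kHz; stage 2 has gain 10 and a pole at 1 MHz.
stages = [([100.0], [1.0, 1.0 / (2 * math.pi * 1e3)]),
          ([10.0],  [1.0, 1.0 / (2 * math.pi * 1e6)])]

print(round(system_gain_db(stages, 1.0)))  # low-frequency gain: 60 dB
print(round(find_ugf(stages) / 1e3))       # unity-gain frequency: 786 kHz
```

Because the second pole sits near the unity-gain crossing, the UGF lands noticeably below the single-pole estimate of 1 MHz, which is exactly the kind of system-level interaction that assembling per-stage models captures.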
TABLE II
MACRO-MODEL ERROR COMPARED TO SPICE

Experiments are conducted on a Linux machine with a Xeon E5-2680 V2 processor, 2.8-GHz frequency, and 256-GB memory.

A. Results of ML-Based Macro-Model Accuracy

We study several options of ML engines for the macro-models, including RF [33], XGBoost [34], and PEA [11], which is an extension of GNN from classification to regression. For PEA, the input data include an adjacency matrix, a node feature matrix, and an edge feature matrix. The adjacency matrix encodes the connections of transistors and other circuit components in the netlist. The node feature matrix includes circuit components' parameters, such as transistor sizes and capacitance values. The edge feature matrix includes the connections of each net, such as the gate type of each transistor in the net. Our ML models are constructed using ML frameworks, including scikit-learn [35], TensorFlow [36], and PyTorch [37]. Each macro-model is trained and tested for the same sub-circuit with 1000 data samples, where 80% are used for training and the other 20% are used for testing. Please note that no test data is seen in training. In our proposed method, the models are constructed for sub-circuits, which are normally small and have small dimension sizes. Also, each model's outputs are voltages and transfer function coefficients, which have less complexity than directly estimated performance metrics. This is one benefit of macro-modeling compared to modeling the entire large circuit, as further illustrated in Fig. 9. Table II shows the comparison of macro-model errors based on different ML engines. The result of each sub-circuit is averaged over all its macro-models. One can see that RF and XGBoost are more accurate than PEA, and RF is the most accurate among them.

B. Results of Circuit Performance Estimation Accuracy

We compare the following methods.
1) MMM RF: This is our proposed approach, where macro-models are based on RF. Tree depth and forest size are set as 10 and 20, respectively.
2) MMM XGBoost: Almost the same as MMM RF; the only difference is that its ML engine is XGBoost.
3) MMM PEA: Almost the same as MMM RF, but the ML engine is the regression model of PEA [11].
4) Flat RF: The performance of an entire circuit is directly estimated by an RF model. Tree depth and forest size are set as 10 and 400, respectively.
5) Flat PEA: Almost the same as Flat RF; the difference is that a PEA [11] model is used as the ML engine.
6) Flat ANN: The performance of an entire circuit is directly estimated by an ANN model as in [6]. The number of hidden layers and the number of neurons in each layer are set as 5 and 256, respectively, according to the paper's settings.
7) NN-symb: The previous work [13], which is symbolic analysis using neural network-based transistor models.
8) CCI-NN: The recent previous work [7], aimed at training sample reduction. A similar experiment setup is used as provided in [7].
9) MOR: An MOR technique [15], [38], which is only tested on linear analog ICs.
10) MTL: An MTL technique [14]. For linear circuits, the transfer function coefficients of each sub-circuit are trained using MTL, where each coefficient is a task. For ADC/DACs, the circuit performance metrics of a circuit system are trained using MTL, where each performance metric is a task. The corresponding network structure for MTL is shown in Fig. 11, where the first several layers in the neural networks are PEA layers [11] and are shared among multiple tasks. The corresponding parameters of the PEA layers are also shared among multiple tasks. For each task, multiple fully connected layers are connected to the shared PEA layers. These layers are task specific, and their parameters are not shared among different tasks.

Fig. 8 depicts the average performance estimation errors compared with SPICE. The error of one test circuit is the average percentage error among all its performance metrics, e.g., gain, UGF, BW, and PM for OTAs, VGAs, and SCFs. PSRR is the performance metric for LDOs, while for ADC/DACs, the performance metrics include SFDR, SNDR, DNL, and INL. We use a sine wave at 1 MHz as the input signal for ADC/DACs and apply an FFT to the output signal to obtain the spectrum and SFDR as well as SNDR [39], [40], [41]. We can observe that MMM RF and MMM XGBoost achieve the smallest errors, which are less than 1% on average and significantly smaller than the 9% and 6% average errors from NN-symbolic and CCI-NN, respectively. Figs. 9 and 10 also show the estimation errors for the specific performance metrics UGF of linear analog ICs and SFDR of ADC/DACs, respectively, where similar trends can be observed.

These results partially confirm the observation [7] that the accuracy of flat ANN performance models tends to degrade for large circuits or a wide performance range. As the ML models of MMM are trained for sub-circuits that have lower complexity than entire circuits, MMM is able to attain significantly higher accuracy.

C. Model Construction and Circuit Performance Estimation Runtime

Circuit performance estimation runtime comparisons for different methods are provided in Table III. The runtime of MMM XGBoost is almost the same as MMM RF and is not included here. Among the 8 methods being compared, MMM
RF is the second fastest for performance estimation runtime and is 2.9× faster than the recent approach CCI-NN [7]. It is also 42× faster than MOR for linear circuits. Flat ANN is faster than MMM RF; however, its performance estimation errors are significantly larger. MTL is 61× slower than MMM RF, as it has a more complicated model structure than RF. Flat RF is slower than MMM RF because it requires more decision trees to model an entire circuit system. The contour plot in Fig. 12 illustrates the relation among tree depth, forest size, and runtime. It shows that runtime increases by about 40× when the forest size increases from 20 in MMM RF to 400 in flat RF.

Model construction time consists of the time for obtaining training data as well as model training. The former dominates the latter, as many circuit simulations are required to prepare training data. Since our macro-models are reused in different circuits, their data preparation time and model training time reported in Tables IV and V are amortized, e.g., the time is scaled by 1/k if the model is reused for k circuits.

TABLE III
COMPARISONS OF PERFORMANCE ESTIMATION TIME

TABLE IV
COMPARISONS OF DATA PREPARATION TIME

TABLE V
COMPARISONS OF MODELING TRAINING TIME
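The amortized bookkeeping behind Tables IV and V (time scaled by 1/k when a macro-model is shared by k circuits) can be stated as a one-line computation. The 20 min/sample and 1000-sample figures below reuse the numbers quoted in the Introduction; the reuse count of 4 is purely illustrative.

```python
def amortized_prep_hours(sim_minutes_per_sample, n_samples, reuse_count):
    """Training-data preparation time charged to one circuit when the
    macro-model (and hence its training data) is shared by reuse_count circuits."""
    total_hours = sim_minutes_per_sample * n_samples / 60.0
    return total_hours / reuse_count

# 20 min per simulation and 1000 samples give ~333 h of raw simulation time...
print(round(amortized_prep_hours(20, 1000, 1)))  # 333
# ...but only a quarter of that is charged per circuit if 4 circuits reuse the model.
print(round(amortized_prep_hours(20, 1000, 4)))  # 83
```

This 1/k scaling is why reusable macro-models, rather than flat per-circuit models, drive the large data-preparation speedups reported above.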
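The tradeoff comparisons of Figs. 13-16 amount to asking which method's (runtime, error) operating points dominate the others': a point dominates when it is no worse in both objectives and strictly better in at least one. A minimal sketch of that dominance test follows, with made-up sample points rather than the measured data behind the figures.

```python
def dominates(p, q):
    """Point p = (runtime, error) dominates q if p is no worse in both
    objectives and strictly better in at least one (both minimized)."""
    return p[0] <= q[0] and p[1] <= q[1] and p != q

def dominated_fraction(candidate, others):
    """Fraction of the other methods' operating points that are dominated
    by at least one point on the candidate's tradeoff curve."""
    hits = sum(1 for q in others if any(dominates(p, q) for p in candidate))
    return hits / len(others)

# Illustrative (runtime in s, error in %) operating points, not measured data.
mmm_rf = [(0.02, 0.8), (0.05, 0.6)]
baseline = [(0.06, 4.0), (0.9, 2.5), (2.0, 1.2)]

print(dominated_fraction(mmm_rf, baseline))  # 1.0: every baseline point is dominated
```

A dominated fraction of 1.0 is what "evident dominance" in a tradeoff plot means: no setting of the baseline beats the candidate on both runtime and error simultaneously.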
on both training runtime and performance estimation error over other methods.

The tradeoff between performance estimation runtime and performance estimation errors on the three-stage VGA and two-stage SCF is shown in Fig. 15, where our MMM RF, MOR, and CCI-NN are compared. Various model orders are explored to obtain multiple MOR results for each circuit. Similarly, multiple hyperparameter settings are set up for different neural networks in order to obtain multiple CCI-NN [7] results. Our MMM RF shows an evident dominance over the other two methods in terms of the tradeoff between performance estimation runtime and estimation errors.

Fig. 13. Tradeoff between training data preparation time and circuit performance estimation error for linear analog ICs.

Fig. 14. Tradeoff between ML model training time and circuit performance estimation error for linear analog ICs.

Fig. 15. Tradeoff between performance estimation runtime and performance estimation errors for three-stage VGA and two-stage SCF.

Fig. 16. Tradeoff between training data set size and performance estimation error for three-stage VGA.

VII. CONCLUSION

Although machine learning-based models have advanced the state-of-the-art for fast analog performance estimation, existing approaches are mostly flat models that suffer from huge model construction cost and low reusability. This work introduces machine learning techniques at the macro-model level to address these problems for linear analog ICs and ADC/DACs. Experimental results on circuits with up to 20K devices show that our approach can reduce model construction cost by three orders of magnitude compared to recent ML techniques. At the same time, it achieves significantly smaller errors and is three orders of magnitude faster than circuit simulation.

In future research, we will extend the macro-modeling techniques to ADC/DACs with feedback and nonlinear analog ICs, and study how to automatically partition a circuit system into sub-circuits for macro-models.

REFERENCES

[1] G. Gielen, P. Wambacq, and W. M. Sansen, "Symbolic analysis methods and applications for analog circuits: A tutorial overview," Proc. IEEE, vol. 82, no. 2, pp. 287-304, Feb. 1994.
[2] M. Celik, L. Pileggi, and A. Odabasioglu, IC Interconnect Analysis. Boston, MA, USA: Kluwer, 2002.
[3] C.-J. Shi and X.-D. Tan, "Canonical symbolic analysis of large analog circuits with determinant decision diagrams," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 19, no. 1, pp. 1-18, Jan. 2000.
[4] W. Daems, G. Gielen, and W. Sansen, "Simulation-based automatic generation of signomial and posynomial performance models for analog integrated circuit sizing," in Proc. IEEE/ACM Int. Conf. Comput. Aided Design, 2001, pp. 70-74.
[5] F. De Bernardinis, M. I. Jordan, and A. S. Vincentelli, "Support vector machines for analog circuit performance representation," in Proc. ACM/IEEE Design Autom. Conf., 2003, pp. 964-969.
[6] J. Liu, M. Hassanpourghadi, Q. Zhang, S. Su, and M. S.-W. Chen, "Transfer learning with Bayesian optimization-aided sampling for efficient AMS circuit modeling," in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design, 2020, pp. 1-9.
[7] M. Hassanpourghadi, S. Su, R. A. Rasul, J. Liu, Q. Zhang, and M. S.-W. Chen, "Circuit connectivity inspired neural network for analog mixed-signal functional modeling," in Proc. ACM/IEEE Design Autom. Conf., 2021, pp. 505-510.
[8] S. Kamineni, A. Sharma, R. Harjani, S. S. Sapatnekar, and B. H. Calhoun, "AuxcellGen: A framework for autonomous generation of analog and memory unit cells," in Proc. Design, Autom. Test Europe, 2023, pp. 1-6.
[9] M. Liu et al., "Towards decrypting the art of analog layout: Placement quality prediction via transfer learning," in Proc. Design, Autom. Test Europe, 2020, pp. 496-501.
[10] H. Wang et al., "GCN-RL circuit designer: Transferable transistor sizing with graph neural networks and reinforcement learning," in Proc. ACM/IEEE Design Autom. Conf., 2020, pp. 1-6.
[11] Y. Li et al., "A customized graph neural network model for guiding analog IC placement," in Proc. IEEE/ACM Int. Conf. Comput. Aided Design, 2020, pp. 1-9.
[12] K. Hakhamaneshi, M. Nassar, M. Phielipp, P. Abbeel, and V. Stojanovic, "Pretraining graph neural networks for few-shot analog circuit modeling and design," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 42, no. 7, pp. 2163-2173, Jul. 2023.
[13] Z. Zhao and L. Zhang, "Efficient performance modeling for automated CMOS analog circuit synthesis," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 29, no. 11, pp. 1824-1837, Nov. 2021.
[14] O. Sener and V. Koltun, "Multi-task learning as multi-objective optimization," in Proc. Adv. Neural Inf. Process. Syst., vol. 31, 2018, pp. 1-12.
[15] U. Baur, P. Benner, and L. Feng, "Model order reduction for linear and nonlinear systems: A system-theoretic perspective," Arch. Comput. Methods Eng., vol. 21, no. 4, pp. 331-358, 2014.
[16] H. Liu, A. Singhee, R. A. Rutenbar, and L. R. Carley, "Remembrance of circuits past: Macromodeling by data mining in large analog design spaces," in Proc. ACM/IEEE Design Autom. Conf., 2002, pp. 437-442.
[17] H. Zhang and A. Doboli, "Fast time-domain simulation through combined symbolic analysis and piecewise linear modeling," in Proc. IEEE Int. Behav. Model. Simulat. Conf., 2004, pp. 141-146.
[18] T. McConaghy and G. Gielen, "Analysis of simulation-driven numerical performance modeling techniques for application to analog circuit optimization," in Proc. IEEE Int. Symp. Circuits Syst., 2005, pp. 1298-1301.
[19] I. Guerra-Gómez, T. McConaghy, and E. Tlelo-Cuautle, "Study of regression methodologies on analog circuit design," in Proc. 16th Latin-American Test Symp. (LATS), 2015, pp. 1-6.
[20] F. Wang et al., "Bayesian model fusion: Large-scale performance modeling of analog and mixed-signal circuits by reusing early-stage data," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 35, no. 8, pp. 1255-1268, Aug. 2016.
[21] T. Kiely and G. Gielen, "Performance modeling of analog integrated circuits using least-squares support vector machines," in Proc. Design, Autom. Test Europe Conf. Exhibition, vol. 1, 2004, pp. 448-453.
[22] F. De Bernardinis, P. Nuzzo, and A. S. Vincentelli, "Mixed signal design space exploration through analog platforms," in Proc. 42nd Annu. Design Autom. Conf., 2005, pp. 875-880.
[23] P. Nuzzo, A. Sangiovanni-Vincentelli, X. Sun, and A. Puggelli, "Methodology for the design of analog integrated interfaces using contracts," IEEE Sensors J., vol. 12, no. 12, pp. 3329-3345, Dec. 2012.
[24] X. Sun, P. Nuzzo, C.-C. Wu, and A. Sangiovanni-Vincentelli, "Contract-based system-level composition of analog circuits," in Proc. 46th Annu. Design Autom. Conf., 2009, pp. 605-610.
[25] G. Gielen, T. McConaghy, and T. Eeckelaert, "Performance space modeling for hierarchical synthesis of analog integrated circuits," in Proc. 42nd Annu. Design Autom. Conf., 2005, pp. 881-886.
[26] J. Starzyk and A. Konczykowska, "Flowgraph analysis of large electronic networks," IEEE Trans. Circuits Syst., vol. 33, no. 3, pp. 302-315, Mar. 1986.
[27] M. Hassanpourghadi, R. A. Rasul, and M. S.-W. Chen, "A module-linking graph assisted hybrid optimization framework for custom analog and mixed-signal circuit parameter synthesis," ACM Trans. Design Autom. Electron. Syst., vol. 26, no. 5, pp. 1-22, 2021.
[28] S. X.-D. Tan, "A general hierarchical circuit modeling and simulation algorithm," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 24, no. 3, pp. 418-434, Mar. 2005.
[29] S. X.-D. Tan, W. Guo, and Z. Qi, "Hierarchical approach to exact symbolic analysis of large analog circuits," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 24, no. 8, pp. 1241-1250, Aug. 2005.
[30] R. A. Rutenbar, G. G. E. Gielen, and J. Roychowdhury, "Hierarchical modeling, optimization, and synthesis for system-level analog and RF designs," Proc. IEEE, vol. 95, no. 3, pp. 640-669, Mar. 2007.
[31] A. V. Oppenheim, J. Buck, M. Daniel, A. S. Willsky, S. H. Nawab, and A. Singer, Signals & Systems. Upper Saddle River, NJ, USA: Prentice-Hall, 1997.
[32] N. Karmokar, A. K. Sharma, J. Poojary, M. Madhusudan, R. Harjani, and S. S. Sapatnekar, "Constructive placement and routing for common-centroid capacitor arrays in binary-weighted and split DACs," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 42, no. 9, pp. 2782-2795, Sep. 2023.
[33] T. K. Ho, "Random decision forests," in Proc. IEEE Int. Conf. Document Anal. Recognit., 1995, pp. 278-282.
[34] T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in Proc. Int. Conf. Knowl. Disc. Data Min., 2016, pp. 785-794.
[35] F. Pedregosa et al., "Scikit-learn: Machine learning in Python," J. Mach. Learn. Res., vol. 12, no. 85, pp. 2825-2830, 2011.
[36] M. Abadi et al., "TensorFlow: Large-scale machine learning on heterogeneous systems," 2015. [Online]. Available: https://www.tensorflow.org/
[37] A. Paszke et al., "PyTorch: An imperative style, high-performance deep learning library," in Proc. Adv. Neural Inf. Process. Syst., vol. 32, 2019, p. 721.
[38] B. Moore, "Principal component analysis in linear systems: Controllability, observability, and model reduction," IEEE Trans. Autom. Control, vol. 26, no. 1, pp. 17-32, Feb. 1981.
[39] K. Andersson and J. J. Wikner, "Characterization of a CMOS current-steering DAC using state-space models," in Proc. IEEE Midwest Symp. Circuits Syst., vol. 2, 2000, pp. 668-671.
[40] S. Hashemi and B. Razavi, "A 7.1 mW 1 GS/s ADC with 48 dB SNDR at Nyquist rate," IEEE J. Solid-State Circuits, vol. 49, no. 8, pp. 1739-1750, Aug. 2014.
[41] J. A. Mielke, "Frequency domain testing of ADCs," IEEE Design Test Comput., vol. 13, no. 1, pp. 64-69, Mar. 1996.
[42] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," 2014, arXiv:1412.6980.

Yishuang Lin (Member, IEEE) received the B.E. degree from the University of Science and Technology of China, Langfang, China, in 2019, and the Ph.D. degree from Texas A&M University, College Station, TX, USA, in 2023. His research interests include machine learning applications in design automation and physical design automation for analog/mixed-signal ICs.

Yaguang Li received the B.E. degree from the North China University of Technology, Beijing, China, in 2015, the M.E. degree from the University of Chinese Academy of Sciences, Shanghai, China, in 2018, and the Ph.D. degree from Texas A&M University, College Station, TX, USA, in 2022. He is currently with Nvidia, Austin, TX, USA. His research interests include machine learning for analog IC automation and advanced standard cell automation.

Meghna Madhusudan received the B.E. degree from the R.V. College of Engineering, Bengaluru, India, in 2017, and the master's degree from the University of Minnesota, Minneapolis, MN, USA, in 2021, where she is currently pursuing the Ph.D. degree. She is with Analog Devices Inc., Wilmington, MA, USA. Her research is focused on analog layout automation and process variability characterization for analog circuits.
Sachin S. Sapatnekar (Fellow, IEEE) received the B.Tech. degree from the Indian Institute of Technology, Bombay, Mumbai, India, in 1987, the M.S. degree from Syracuse University, Syracuse, NY, USA, in 1989, and the Ph.D. degree from the University of Illinois, Champaign, IL, USA, in 1992. He teaches with the University of Minnesota, Minneapolis, MN, USA, where he holds a Distinguished McKnight University Professorship and the Henle Chair. Dr. Sapatnekar has received 12 Best Paper Awards (including two ICCAD 10-year Retrospective Most Influential Paper Awards), the SRC Technical Excellence Award, and the SIA University Research Award. He is a Fellow of the ACM.

Jiang Hu (Fellow, IEEE) received the B.S. degree in optical engineering from Zhejiang University, Zhejiang, China, in 1990, the M.S. degree in physics in 1997, and the Ph.D. degree in electrical engineering from the University of Minnesota in 2001. He is a Professor with the Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA. His research interests include EDA, computer architecture, and hardware security. Dr. Hu received the Best Paper Awards at DAC 2001, ICCAD 2011, MICRO 2021, and ASPDAC 2023. He served as the General Chair for the ACM International Symposium on Physical Design 2012 and the Technical Program Co-Chair for the ACM/IEEE Workshop on Machine Learning for CAD 2023.