Machine Learning in EDA: When and How
Bei Yu
Chinese University of Hong Kong
2023 ACM/IEEE 5th Workshop on Machine Learning for CAD (MLCAD) | 979-8-3503-0955-3/23/$31.00 ©2023 IEEE | DOI: 10.1109/MLCAD58807.2023.10299822
Abstract—… and gigantic data, recently there has been a surge in applying and adapting machine learning to accelerate the design closure. In this … distinct challenges in EDA, including improved netlist representation, advanced timing modeling, netlist-layout multimodality, and constrained AIGC.

Fig. 1 Typical ML methodologies in different EDA stages (front-end: HLS and logic synthesis; back-end: placement and routing).
I. MACHINE LEARNING IN EDA
Over the past few decades, there has been a noticeable trend towards increasing standardization and complexity in the field of Electronic Design Automation (EDA). The contemporary chip design flow can be divided into two distinct parts: firstly, the front-end EDA, encompassing architectural design, high-level synthesis, and logic synthesis; and secondly, the back-end EDA, comprising placement and routing. As semiconductor technology continues to advance, the scale of integrated circuits has grown exponentially, presenting significant challenges to the scalability and reliability of the chip design flow.

Several recent survey papers have been published on the topic of ML-EDA. Rapp et al. [1] presented a comprehensive survey of the application of machine learning in the optimization and exploration strategies of integrated circuits, along with trends in the employed ML algorithms. Huang et al. [2] categorized existing ML studies in the EDA field into four distinct categories: decision-making, performance prediction, black-box optimization, and automated design, ordered by increasing degree of automation. Chen et al. [3] summarized the state-of-the-art research in ML-EDA, utilizing a taxonomy of ML methodologies, and offered potential insights from previously resolved EDA problems. It is worth noting that their focus is primarily on applied ML algorithms in recent years, which complements our survey that specifically examines the unique challenges encountered in EDA.

In this section, we aim to provide a comprehensive survey on the application of ML methods in each stage of the EDA flow, as illustrated in Figure 1. We will commence by discussing the microarchitecture design of a processor, which aims to define the implementation of an instruction set architecture (ISA). Due to the vast design space, an efficient design space exploration (DSE) technique, coupled with feature engineering, is promising for microarchitecture design. One notable approach in this domain is BOOM-Explorer [4], which utilizes a novel Gaussian process model with deep kernel learning functions to characterize the relevant features. This enables BOOM-Explorer to explore microarchitecture designs that strike an optimal balance between power consumption and performance, while significantly reducing the time required for DSE.

High-level synthesis (HLS) transforms a high-level specification of an integrated circuit (IC) into a register-transfer level (RTL) description. However, the synthesis process can be time-consuming, especially for large-scale systems. To address this challenge, machine learning algorithms, including feature engineering and graph neural networks (GNN), have been employed to accelerate HLS. For instance, Sun et al. [5] developed a novel correlated multi-objective and multi-fidelity Gaussian process (CGP) model to effectively handle the relationships among various design objectives, thereby improving the efficiency of HLS. Additionally, Ferretti et al. [6] proposed the use of graph representation learning from software specifications. Fine-tuned with a few-shot learning approach, the learned model enables effective DSE after a short training period.

Following high-level synthesis, logic synthesis is responsible for transforming the RTL description of a circuit into a gate-level representation in the target technology. Notably, both GNN and convolutional neural networks (CNN) have been leveraged to expedite the logic synthesis process. Xu et al. [7] introduced SNS, which combines GNN and CNN to predict the area, power, and timing of a wide range of designs. By leveraging this approach, the delay in obtaining synthesis results can be significantly reduced. Wang et al. [8] developed a novel framework for netlist representation learning based on contrastive learning (CL). This framework extracts the fundamental logic functionality of netlists using a customized GNN architecture designed specifically for circuit representation learning, thereby improving efficiency in logic synthesis.

Machine-learning techniques, such as CNN and combinations of CNN and GNN, have also found applications in the back-end EDA flow, i.e., placement and routing. To accelerate placement, DREAMPlace [9] tackles the analytical placement problem by analogizing it to training a neural network, which enables more efficient placement algorithms. Liu et al. [10]
employed a CNN to predict congestion hotspots, which are then integrated into a placement engine to achieve more route-friendly results. Routing is typically the most time-consuming task in physical design, and it often requires a combination of ML models and traditional algorithms for effective solutions. For instance, Qu et al. [11] proposed a reinforcement learning (RL)-based algorithm that learns an ordering policy to minimize design rule check violations based on net features. To reduce turnaround time at the pre-routing stage, Liu et al. [12] designed a concurrent learning-assisted early-stage timing optimization framework called TSteiner. They utilize a customized GNN to obtain sign-off timing optimization gradients, which guide the refinement of Steiner points. These approaches demonstrate the potential of combining ML methodologies with traditional algorithms in back-end EDA tasks.
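The placement-as-network-training analogy mentioned above (DREAMPlace [9]) can be sketched in a few lines: treat the cell coordinates as trainable parameters and minimize a differentiable wirelength objective by gradient descent. This is a minimal one-dimensional illustration under our own naming, not the DREAMPlace implementation; the log-sum-exp proxy is a standard smooth approximation of half-perimeter wirelength, and the density/overlap term a real placer also needs is omitted.

```python
import numpy as np

def lse_wirelength(x, nets, gamma=1.0):
    """Smooth wirelength proxy: log-sum-exp approximates max - min
    over the pin coordinates of each net (one axis shown)."""
    total = 0.0
    for pins in nets:
        xs = x[pins]
        total += gamma * (np.log(np.exp(xs / gamma).sum())
                          + np.log(np.exp(-xs / gamma).sum()))
    return total

def grad_lse(x, nets, gamma=1.0):
    """Analytic gradient of the proxy w.r.t. each cell coordinate."""
    g = np.zeros_like(x)
    for pins in nets:
        xs = x[pins]
        p = np.exp(xs / gamma); p /= p.sum()    # softmax -> "max" side
        q = np.exp(-xs / gamma); q /= q.sum()   # softmin -> "min" side
        g[pins] += p - q
    return g

def place(x0, nets, lr=0.1, steps=200):
    """Treat coordinates like network weights and run gradient descent."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x -= lr * grad_lse(x, nets)
    return x
```

Running `place` on a single three-pin net pulls the pins together, shrinking the smooth wirelength exactly as training shrinks a loss.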
II. HOW MACHINE LEARNING IS INTEGRATED

A. Methodologies Perspective

There are significant disparities between traditional EDA methodologies and ML approaches. Traditional EDA encompasses techniques such as placement, routing, synthesis, simulation, and more. These methods demonstrate robustness through the analysis of optimality in known problems. They require less training data, exhibit solvability in known problem domains, and offer good interpretability. However, when confronted with dynamic and complex problems, traditional EDA methods may oversimplify the problem, leading to suboptimal solutions.

In contrast, machine learning methods encompass supervised learning [13], unsupervised learning [8], [14], and reinforcement learning techniques [15]. Machine learning methods are amenable to parallel computation using GPUs [16], [17], facilitating efficient design and end-to-end training for complex problems. Leveraging data, machine learning methods demonstrate the ability to solve a wide range of problems. However, they are susceptible to overreliance on data, potentially underutilizing the inherent mechanisms and characteristics of the problem.

B. Task Type Perspective

Over time, there have been noticeable trends in the application of machine learning in the field of EDA.

Fig. 2 categorizes articles from recent MLCAD conferences, providing insights into the changing proportions of research on different data types. Notably, studies on tabular data consistently maintained a high proportion, accounting for 50.00% in 2019 and 68.75% in 2021. Furthermore, there is an increasing trend in research on image data, with proportions reaching 24.00% in 2020 and 26.09% in 2022. We also observed significant growth in research related to graph data, with respective increases of 12.00% in 2019 and 13.50% in 2022 compared to the previous years. In 2022, graph data research accounted for 26.09%, positioning it as an equally important focus as image data. While the proportion of research on textual data is relatively low, its significance should not be underestimated.

Fig. 2 Recent years show a noticeable trend toward a high proportion of tabular data, accompanied by an increase in image and graph data.

We next delve into the different purposes of machine learning algorithms applied to EDA, as observed in recent MLCAD conferences. The purposes are categorized as shown in Figure 3. Decision-making emerges as the primary focus in recent years, with proportions of 50.00% in 2019, 56.00% in 2020, and 34.78% in 2022. Classification and regression are identified as other essential research objectives, with relatively balanced proportions of 37.50% each in 2019 and 2021. However, there has been a notable increase in the proportion of regression, reaching 32.00% in 2020 and 52.17% in 2022. This suggests a growing focus on utilizing machine learning for classification, regression analysis, prediction, and modeling in EDA applications. Lastly, generation represents a relatively small research area, with proportions peaking at 8.70% in 2022.

Fig. 3 The observed trend reflects a strong interest in using machine learning for decision support, classification, and regression analysis.

C. Learning Closure Perspective

In traditional machine learning approaches, data is typically divided into training and testing sets for model evaluation. However, this approach has limitations when applied to EDA scenarios. To successfully introduce AI-accelerated EDA technologies to the market, a different machine learning paradigm, as shown in Fig. 4, is required.

Fig. 4 Adaptive integrated machine learning paradigm: initial design of experiments, accuracy-aware learning, self-verification, data augmentation, and knowledge extraction.

At the beginning of this process, it is crucial to design initial experiments and establish a data acquisition loop for the targeted EDA application. The subsequent training process aims to achieve the desired accuracy, dynamically adjusting the training based on the accuracy requirements of the model to align with the preset requirements. Additionally, this process is accompanied by a self-validation phase that takes into account out-of-distribution factors, such as PVT variations, tails of Monte Carlo distributions, and more. Based on the models' accuracy and robustness, further data augmentation is
performed to enhance the model's precision and generalization. After completing the closed-loop training process, the extraction of relevant knowledge becomes paramount. The extracted knowledge is then fed back to design teams to further improve the design process.

By adopting this machine learning paradigm, a wide range of chip design challenges can be effectively addressed, surpassing the traditional "train and test" paradigm.
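The closed loop described above can be sketched as a driver routine. Everything here is an illustrative stand-in, not a real training system: the callbacks `train`, `accuracy_of`, `self_verify`, and `augment` are hypothetical names we introduce, and only the control flow of accuracy-aware learning, self-verification, and data augmentation is shown.

```python
def sketch_closed_loop(train, accuracy_of, self_verify, augment,
                       target_acc=0.9, max_rounds=10):
    """Illustrative driver for the adaptive paradigm of Fig. 4:
    train -> check accuracy -> self-verify on out-of-distribution
    cases (e.g. PVT corners, Monte Carlo tails) -> augment data ->
    repeat until the preset accuracy target is met."""
    data = ["initial design of experiments"]   # seed samples
    model = None
    for round_ in range(max_rounds):
        model = train(data)
        ok_acc = accuracy_of(model, data) >= target_acc
        ok_ood = self_verify(model)            # out-of-distribution check
        if ok_acc and ok_ood:
            return model, round_               # ready for knowledge extraction
        data = augment(data)                   # close the data-acquisition loop
    return model, max_rounds
```

With stub callbacks whose accuracy improves as the data set grows, the loop terminates once both the accuracy target and the self-verification check pass.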
III. UNIQUE CHALLENGES IN EDA

Although many machine learning algorithms have been deployed in various EDA stages, there are still some unique challenges that need to be solved.

A. Better Netlist Representations

Due to the graph nature of netlists, the emerging Graph Neural Networks (GNNs) have become the top choice for netlist representation learning [18]. A conventional GNN [19] follows an iterative neighborhood aggregation scheme to capture the structural information within nodes' neighborhoods. In recent years, we have seen a wide application of GNN in many netlist-level tasks, e.g., testability analysis [20], reverse engineering [14], [21], power estimation [22], etc. While these GNN-driven works have achieved promising results compared with traditional methods, they are far from competent for learning high-quality netlist representations, as pointed out by previous studies [8]. The main challenge faced by the conventional GNN methods is their limited generalization capacity, stemming from the inherent instability of netlist structures. In particular, the netlist (graph) structure of a given circuit might vary greatly with respect to different technology nodes, cell libraries, or even logic synthesis tools.

Fig. 5 gives an example to illustrate the limitation of conventional GNNs when dealing with netlists, given the fact that semantic (logic functionality) and structural information of netlists may conflict with each other. We consider three distinct netlists, denoted as A, B, and C. In this context, both A and B implement the same function and share akin semantics. As a consequence, they should ideally manifest proximity within the representation space. However, structural methods would push their representations apart, driven by their disparate structures. Conversely, A and C implement different functions, thus diverging in terms of their underlying semantics. Nevertheless, their representations would be pulled close by structural methods based on their similar structures.

Fig. 5 Illustration of the main challenge for netlist representation learning.

As evidenced by the above example, the main challenge at hand centers around developing methodologies that can transcend the constraint of structural instability, enabling the acquisition of netlist representations that possess broader applicability and robustness. Encouragingly, recent endeavors have been directed towards enhancing the quality of netlist representations, with an emphasis on better generalization capabilities. For instance, Wang et al. [8] introduce a novel self-supervised netlist representation learning flow that aspires to learn universal netlist knowledge. By utilizing contrastive learning, netlists/gates with similar functionality (semantics) are drawn closer in the representation space, while those with distinct functionality are pushed away. Their results demonstrate the notable superiority of functionality-based netlist representations compared with structural ones, particularly when generalizing to unseen data. Similarly, DeepGate [24] utilizes the signal probability (i.e., the probability of being logic '1') of every gate as supervision to learn the logic functionality of netlists. To bolster the generalization potential of their model, the authors further transform the netlists into a unified form, and-inverter graphs (AIGs). Nevertheless, representations learned through signal probability cannot model the relative distance between different logic functions. TABLE I summarizes the comparison between different netlist representation learning methods. In sum, recent advances showcase a promising trajectory toward more adaptive and robust netlist representations.

TABLE I Comparison among netlist representation methods

Characteristic     | Deep | DAG | AIG | Functional | Relative-similarity
ShapeHashing [23]  |  ✗   |  ✓  |  ✗  |     ✗      |         ✗
GraphSage [19]     |  ✓   |  ✗  |  ✗  |     ✗      |         ✗
ABGNN [14]         |  ✓   |  ✓  |  ✗  |     ✗      |         ✗
FGNN [8]           |  ✓   |  ✓  |  ✗  |     ✓      |         ✓
DeepGate [24]      |  ✓   |  ✓  |  ✓  |     ✓      |         ✗
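The iterative neighborhood aggregation scheme discussed above can be sketched as a GraphSAGE-style mean-aggregation layer followed by a mean readout. This is a simplified NumPy illustration under our own naming and weight shapes, not the architecture of any cited work.

```python
import numpy as np

def sage_layer(h, adj, w_self, w_neigh):
    """One mean-aggregation layer: average neighbor embeddings,
    combine with the node's own embedding, apply a ReLU."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neigh_mean = adj @ h / deg
    return np.maximum(0.0, h @ w_self + neigh_mean @ w_neigh)

def embed_netlist(h, adj, layers):
    """k stacked layers let each node see its k-hop neighborhood;
    a graph-level embedding is the mean of final node embeddings."""
    for w_self, w_neigh in layers:
        h = sage_layer(h, adj, w_self, w_neigh)
    return h.mean(axis=0)
```

On a three-node path graph with identity weights, one layer already mixes each node's feature with its neighbors' mean, which is exactly the source of the structural bias discussed above: embeddings reflect wiring, not logic function.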
B. Timing Modeling

Fig. 6 illustrates a timing path where a signal propagates from the startpoint to the endpoint. A timing path contains many cells and wires. During timing analysis, it is necessary to produce accurate cell and wire delay results in a fast way. Traditionally, cell timing is calculated based on look-up tables, including the nonlinear cell delay model (NLDM) and the current source model (CSM) [25], while wire timing is computed based on analytical models, including the Elmore model [26] and the D2M model [27]. However, the accuracy and efficiency of traditional methods cannot meet timing sign-off requirements in advanced technologies. Some simple machine learning models, such as XGBoost and random forest, have helped to solve timing modeling problems such as cell delay modeling [28], wire delay modeling [29], path-based timing analysis [30], and routing-free timing analysis [31]. However, it is still hard for simple learning models to capture structural information, which limits their accuracy [32].

To solve this issue, timing modeling has stepped into the graph-learning era [32]–[35]. Netlists are described as graphs where cells are nodes and wires are edges. Popular graph learning methods are used to learn information from the circuit structure by aggregating information from local neighbors, yielding both node embeddings and graph embeddings.
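As a concrete reference point for the analytical wire models mentioned above, the Elmore delay [26] at a node of an RC tree is the sum, over the resistors on the path from the driver, of each resistance times the total capacitance downstream of that resistor. A small sketch (tree encoding and variable names are ours):

```python
def elmore_delay(tree, r, c, root=0):
    """Elmore delay on an RC tree.

    tree[p] lists the children of node p; r[n] is the resistance of
    the edge into node n, c[n] the node capacitance.  delay(n) is the
    sum over edges on the root->n path of R_edge * C_downstream."""
    cap = dict(c)                       # will hold subtree capacitances

    def subtree_cap(n):                 # post-order capacitance sums
        for ch in tree.get(n, []):
            cap[n] += subtree_cap(ch)
        return cap[n]

    subtree_cap(root)
    delay = {root: 0.0}

    def walk(n):                        # pre-order delay accumulation
        for ch in tree.get(n, []):
            delay[ch] = delay[n] + r[ch] * cap[ch]
            walk(ch)

    walk(root)
    return delay
```

For a two-segment chain with R1 = 1, R2 = 2 and unit node capacitances, the endpoint delay is R1·(C1 + C2) + R2·C2 = 4, matching the hand calculation.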
that a method does not rely on the time-consuming trial global routing process. Among these methods, Lay-Net [47] shows superior performance, highlighting the importance of layout-netlist information fusion and multi-scale feature extraction in congestion prediction.
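The layout-netlist fusion idea can be illustrated as a late-fusion step: tile a global netlist embedding across the layout feature grid, concatenate it channel-wise with the layout features, and mix the channels linearly into a congestion map. This is a deliberately simplified stand-in, not Lay-Net's architecture; all names and shapes are our assumptions.

```python
import numpy as np

def fuse_layout_netlist(grid_feats, net_embed, w, b):
    """Hypothetical fusion: broadcast a D-dim netlist embedding over an
    (H, W, C) layout feature map, concatenate to (H, W, C+D), and apply a
    1x1-convolution-like linear projection to per-tile congestion scores."""
    h, w_, _ = grid_feats.shape
    tiled = np.broadcast_to(net_embed, (h, w_, net_embed.shape[0]))
    fused = np.concatenate([grid_feats, tiled], axis=-1)
    return fused @ w + b            # (H, W) congestion prediction map
```

Every grid tile thus sees both its local layout features and the global netlist context, which is the intuition behind multimodal congestion predictors.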
D. Constrained AIGC

Differing from the general concept of artificial intelligence generated content (AIGC), the generated content within the EDA domain is typically subject to additional design rules. Evaluating the quality of models relies significantly on the legality of the output. A prime example of constrained AIGC in EDA pertains to the generation of layout patterns. The establishment of reliable layout pattern libraries serves as the cornerstone for diverse design-for-manufacturability research. With the escalating demand for layout patterns in lithography design applications based on machine learning [52]–[54], constructing a feasible large-scale pattern library could prove highly time-consuming due to the extended logic-to-chip design cycle. Recent literature has proposed several learning-based methods for generating layout patterns, such as [55]–[58]. To fit the latent distribution of layout patterns and generate novel instances, well-known generative models from the computer vision domain have been introduced into the pattern generation task. However, two predominant constraints differentiate layout pattern generation from conventional image generation.

The first significant constraint pertains to the discrete nature of layout patterns, as depicted in Fig. 8. In a layout pattern, the state of each pixel is binary, while prevailing image generation techniques are designed for continuous state spaces. To transform a continuous model output into a layout pattern, some existing works [55], [56] produce layout topologies through binary truncation of the generated continuous examples. However, such truncation could potentially compromise the model's capacity, since the details of the model prediction are removed in the truncation process. To address this concern, DiffPattern [58] introduces a practical framework for layout pattern generation. Through the application of a discrete diffusion model, DiffPattern confines each entry's state within a pre-defined discrete state space. As a result, DiffPattern can directly generate discrete layout patterns without clipping.

Fig. 8 Illustration of the discrete state space constraint in layout pattern generation. (a) Binarization of continuous model output may lead to information loss; (b) a discrete model can output discrete samples directly.
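The information loss caused by binary truncation can be seen directly: two visibly different continuous outputs collapse to the same binary topology after thresholding. A toy sketch (threshold and example arrays are ours):

```python
import numpy as np

def truncate_to_layout(cont, thresh=0.5):
    """Binarize a continuous generator output into a 0/1 layout topology;
    everything the model expressed between the two states is discarded."""
    return (cont >= thresh).astype(int)

# Two quite different continuous outputs that truncate identically,
# illustrating the information loss that motivates discrete-state models.
a = np.array([[0.51, 0.49], [0.99, 0.01]])
b = np.array([[0.99, 0.01], [0.51, 0.49]])
```

After truncation both `a` and `b` map to the same pattern, so the distinction between a confident and a borderline prediction is lost.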
On a different note, the generated layout patterns are required to follow the design rules, which makes layout pattern generation more challenging. To prevent the production of illicit layout patterns that contravene design rules, [56], [57] derive latent regularization from the training dataset. However, the implicit constraint learned from this training set might lack flexibility and reliability. Beyond the inconvenience of needing to train a new model on a specific dataset that adheres to updated design rules, a significant proportion of the generated patterns violate these rules. Addressing these concerns, DiffPattern [58] devises a nonlinear system capable of identifying a legal solution for each topology matrix. This system can be easily adjusted to accommodate various design rules.

While the realm of constrained AIGC in the context of EDA has experienced substantial exploration in recent literature, the pursuit of a more robust constrained AIGC approach within EDA remains a persistently challenging and evolving endeavor.

IV. CONCLUSION AND FUTURE DIRECTION

Recent advancements have witnessed the integration of ML into EDA, a merger that has promised and delivered notable improvements in the design flow. This incorporation has been marked by successful outcomes in classification, detection, and design space exploration challenges. Despite the advancements and numerous ML algorithms introduced into EDA, the path forward still presents unique obstacles, such as the development of better netlist representations and the issues related to timing modeling, netlist-layout multimodality, and constrained content generation.

One future direction is applying large language models (LLMs) to the EDA flow, harnessing their analytical capabilities to optimize chip design processes and revolutionize the way EDA flows are conceptualized and implemented. Interested readers may refer to [59], [60] for more discussions and explorations.

ACKNOWLEDGEMENT

The author thanks the many students and collaborators who have helped to develop the works and perspectives given in this paper: Guojin Chen, Hongduo Liu, Zixiao Wang, Ziyi Wang, Peng Xu, Yuyang Ye, Yu Zhang, Su Zheng.

REFERENCES

[1] M. Rapp, H. Amrouch, Y. Lin, B. Yu, D. Z. Pan, M. Wolf, and J. Henkel, "MLCAD: A survey of research in machine learning for CAD (keynote paper)," IEEE TCAD, vol. 41, no. 10, pp. 3162–3181, 2021.
[2] G. Huang, J. Hu, Y. He, J. Liu, M. Ma, Z. Shen, J. Wu, Y. Xu, H. Zhang, K. Zhong et al., "Machine learning for electronic design automation: A survey," ACM TODAES, vol. 26, no. 5, pp. 1–46, 2021.
[3] T. Chen, G. L. Zhang, B. Yu, B. Li, and U. Schlichtmann, "Machine learning in advanced IC design: A methodological survey," IEEE MDAT, vol. 40, no. 1, pp. 17–33, 2022.
[4] C. Bai, Q. Sun, J. Zhai, Y. Ma, B. Yu, and M. D. Wong, "BOOM-Explorer: RISC-V BOOM microarchitecture design space exploration framework," in Proc. ICCAD, 2021.
[5] Q. Sun, T. Chen, S. Liu, J. Miao, J. Chen, H. Yu, and B. Yu, "Correlated multi-objective multi-fidelity optimization for HLS directives design," in Proc. DATE, 2021.
[6] L. Ferretti, A. Cini, G. Zacharopoulos, C. Alippi, and L. Pozzi, "Graph neural networks for high-level synthesis design space exploration," ACM TODAES, vol. 28, no. 2, pp. 1–20, 2022.
[7] C. Xu, C. Kjellqvist, and L. W. Wills, "SNS's not a synthesizer: A deep-learning-based synthesis predictor," in Proc. ISCA, 2022.
[8] Z. Wang, C. Bai, Z. He, G. Zhang, Q. Xu, T.-Y. Ho, B. Yu, and Y. Huang, "Functionality matters in netlist representation learning," in Proc. DAC, 2022.
[9] Y. Lin, S. Dhar, W. Li, H. Ren, B. Khailany, and D. Z. Pan, "DREAMPlace: Deep learning toolkit-enabled GPU acceleration for modern VLSI placement," in Proc. DAC, 2019.
[10] S. Liu, Q. Sun, P. Liao, Y. Lin, and B. Yu, "Global placement with deep learning-enabled explicit routability optimization," in Proc. DATE, 2021.
[11] T. Qu, Y. Lin, Z. Lu, Y. Su, and Y. Wei, "Asynchronous reinforcement learning framework for net order exploration in detailed routing," in Proc. DATE, 2021.
[12] S. Liu, Z. Wang, F. Liu, Y. Lin, B. Yu, and M. Wong, "Concurrent sign-off timing optimization via deep Steiner points refinement," in Proc. DAC, 2023.
[13] Z. Wang, S. Liu, Y. Pu, S. Chen, T.-Y. Ho, and B. Yu, "Realistic sign-off timing prediction via multimodal fusion," in Proc. DAC, 2023.
[14] Z. He, Z. Wang, C. Bai, H. Yang, and B. Yu, "Graph learning-based arithmetic block identification," in Proc. ICCAD, 2021.
[15] Z. Pei, F. Liu, Z. He, G. Chen, H. Zheng, K. Zhu, and B. Yu, "AlphaSyn: Logic synthesis optimization with efficient Monte Carlo tree search," in Proc. ICCAD, 2023.
[16] Z. Yu, G. Chen, Y. Ma, and B. Yu, "A GPU-enabled level set method for mask optimization," in Proc. DATE, 2021.
[17] G. Chen, Z. Yu, H. Liu, Y. Ma, and B. Yu, "DevelSet: Deep neural level set for instant mask optimization," in Proc. ICCAD, 2021.
[18] G. Huang, J. Hu, Y. He, J. Liu, M. Ma, Z. Shen, J. Wu, Y. Xu, H. Zhang, K. Zhong et al., "Machine learning for electronic design automation: A survey," ACM TODAES, vol. 26, no. 5, pp. 1–46, 2021.
[19] W. Hamilton, Z. Ying, and J. Leskovec, "Inductive representation learning on large graphs," in Proc. NIPS, 2017.
[20] Y. Ma, H. Ren, B. Khailany, H. Sikka, L. Luo, K. Natarajan, and B. Yu, "High performance graph convolutional networks with applications in testability analysis," in Proc. DAC, 2019.
[21] L. Alrahis, A. Sengupta, J. Knechtel, S. Patnaik, H. Saleh, B. Mohammad, M. Al-Qutayri, and O. Sinanoglu, "GNN-RE: Graph neural networks for reverse engineering of gate-level netlists," IEEE TCAD, vol. 41, no. 8, pp. 2435–2448, 2021.
[22] Y. Zhang, H. Ren, and B. Khailany, "GRANNITE: Graph neural network inference for transferable power estimation," in Proc. DAC, 2020.
[23] W. Li, A. Gascon, P. Subramanyan, W. Y. Tan, A. Tiwari, S. Malik, N. Shankar, and S. A. Seshia, "WordRev: Finding word-level structures in a sea of bit-level gates," in Proc. HOST, 2013.
[24] M. Li, S. Khan, Z. Shi, N. Wang, H. Yu, and Q. Xu, "DeepGate: Learning neural representations of logic gates," in Proc. DAC, 2022.
[25] Synopsys, "PrimeTime user guide," https://www.synopsys.com/cgi-bin/imp/pdfdla/pdfr1.cgi?file=primetime-wp.pdf, 2023.
[26] W. C. Elmore, "The transient response of damped linear networks with particular regard to wideband amplifiers," Journal of Applied Physics, vol. 19, no. 1, pp. 55–63, 1948.
[27] C. J. Alpert, A. Devgan, and C. Kashyap, "A two moment RC delay metric for performance optimization," in Proc. ISPD, 2000.
[28] S. M. Ebrahimipour, B. Ghavami, H. Mousavi, M. Raji, Z. Fang, and L. Shannon, "Aadam: A fast, accurate, and versatile aging-aware cell library delay model using feed-forward neural network," in Proc. ICCAD, 2020.
[29] H.-H. Cheng, I. H.-R. Jiang, and O. Ou, "Fast and accurate wire timing estimation on tree and non-tree net structures," in Proc. DAC, 2020.
[30] A. B. Kahng, U. Mallappa, and L. Saul, "Using machine learning to predict path-based slack from graph-based timing analysis," in Proc. ICCD, 2018.
[31] D. Hyun, Y. Fan, and Y. Shin, "Accurate wirelength prediction for placement-aware synthesis through machine learning," in Proc. DATE, 2019.
[32] R. Liang, Z. Xie, J. Jung, V. Chauha, Y. Chen, J. Hu, H. Xiang, and G.-J. Nam, "Routing-free crosstalk prediction," in Proc. ICCAD, 2020.
[33] Z. Xie, R. Liang, X. Xu, J. Hu, Y. Duan, and Y. Chen, "Net2: A graph attention network method customized for pre-placement net length estimation," in Proc. ASPDAC, 2021.
[34] Z. Guo, M. Liu, J. Gu, S. Zhang, D. Z. Pan, and Y. Lin, "A timing engine inspired graph neural network model for pre-routing slack prediction," in Proc. DAC, 2022.
[35] K. K.-C. Chang, C.-Y. Chiang, P.-Y. Lee, and I. H.-R. Jiang, "Timing macro modeling with graph neural networks," in Proc. DAC, 2022.
[36] Y. Ye, T. Chen, Z. Wang, H. Yan, B. Yu, and L. Shi, "Fast and accurate aging-aware cell timing model via graph learning," IEEE TCAS II, 2023.
[37] Y. Ye, T. Chen, Y. Gao, H. Yan, B. Yu, and L. Shi, "Fast and accurate wire timing estimation based on graph learning," in Proc. DATE, 2023.
[38] C. Yu and Z. Zhang, "Painting on placement: Forecasting routing congestion using conditional generative adversarial nets," in Proc. DAC, 2019.
[39] W. Li, G. Chen, H. Yang, R. Chen, and B. Yu, "Learning point clouds in EDA," in Proc. ISPD, 2021.
[40] B. Wang, G. Shen, D. Li, J. Hao, W. Liu, Y. Huang, H. Wu, Y. Lin, G. Chen, and P. A. Heng, "LHNN: Lattice hypergraph neural network for VLSI congestion prediction," in Proc. DAC, 2022.
[41] S. Zheng, L. Zou, S. Liu, Y. Lin, B. Yu, and M. D. F. Wong, "Mitigating distribution shift for congestion optimization in global placement," in Proc. DAC, 2023.
[42] Z. Xie, Y.-H. Huang, G.-Q. Fang, H. Ren, S.-Y. Fang, Y. Chen, and J. Hu, "RouteNet: Routability prediction for mixed-size designs using convolutional neural network," in Proc. ICCAD, 2018.
[43] K. Baek, H. Park, S. Kim, K. Choi, and T. Kim, "Pin accessibility and routing congestion aware DRC hotspot prediction using graph neural network and U-Net," in Proc. ICCAD, 2022.
[44] R. Liang, H. Xiang, J. Jung, J. Hu, and G.-J. Nam, "A stochastic approach to handle non-determinism in deep learning-based design rule violation predictions," in Proc. ICCAD, 2022.
[45] E. C. Barboza, N. Shukla, Y. Chen, and J. Hu, "Machine learning-based pre-routing timing prediction with reduced pessimism," in Proc. DAC, 2019.
[46] X. He, Z. Fu, Y. Wang, C. Liu, and Y. Guo, "Accurate timing prediction at placement stage with look-ahead RC network," in Proc. DAC, 2022.
[47] S. Zheng, L. Zou, P. Xu, S. Liu, B. Yu, and M. D. F. Wong, "Lay-Net: Grafting netlist knowledge on layout-based congestion prediction," in Proc. ICCAD, 2023.
[48] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin Transformer: Hierarchical vision transformer using shifted windows," in Proc. ICCV, 2021.
[49] C.-C. Chang, J. Pan, T. Zhang, Z. Xie, J. Hu, W. Qi, C.-W. Lin, R. Liang, J. Mitra, E. Fallon, and Y. Chen, "Automatic routability predictor development using neural architecture search," in Proc. ICCAD, 2021.
[50] A. Ghose, V. Zhang, Y. Zhang, D. Li, W. Liu, and M. Coates, "Generalizable cross-graph embedding for GNN-based congestion prediction," in Proc. ICCAD, 2021.
[51] Z. Yang, D. Li, Y. Zhang, Z. Zhang, G. Song, J. Hao et al., "Versatile multi-stage graph neural network for circuit representation," Proc. NeurIPS, vol. 35, pp. 20313–20324, 2022.
[52] G. Chen, W. Chen, Q. Sun, Y. Ma, H. Yang, and B. Yu, "DAMO: Deep agile mask optimization for full-chip scale," IEEE TCAD, vol. 41, no. 9, pp. 3118–3131, 2022.
[53] G. Chen, Z. Pei, H. Yang, Y. Ma, B. Yu, and M. Wong, "Physics-informed optical kernel regression using complex-valued neural fields," in Proc. DAC, 2023.
[54] W. Zhao, X. Yao, Z. Yu, G. Chen, Y. Ma, B. Yu, and M. D. F. Wong, "AdaOPC: A self-adaptive mask optimization framework for real design patterns," in Proc. ICCAD, 2022.
[55] H. Yang, P. Pathak, F. Gennari, Y.-C. Lai, and B. Yu, "DeePattern: Layout pattern generation with transforming convolutional auto-encoder," in Proc. DAC, 2019.
[56] X. Zhang, J. Shiely, and E. F. Young, "Layout pattern generation and legalization with generative learning models," in Proc. ICCAD, 2020.
[57] L. Wen, Y. Zhu, L. Ye, G. Chen, B. Yu, J. Liu, and C. Xu, "LayouTransformer: Generating layout patterns with transformer via sequential pattern modeling," in Proc. ICCAD, 2022.
[58] Z. Wang, Y. Shen, W. Zhao, Y. Bai, G. Chen, F. Farnia, and B. Yu, "DiffPattern: Layout pattern generation via discrete diffusion," arXiv preprint arXiv:2303.13060, 2023.
[59] J. Blocklove, S. Garg, R. Karri, and H. Pearce, "Chip-Chat: Challenges and opportunities in conversational hardware design," in Proc. MLCAD, 2023.
[60] Z. He, H. Wu, X. Zhang, X. Yao, S. Zheng, H. Zheng, and B. Yu, "ChatEDA: A large language model powered autonomous agent for EDA," in Proc. MLCAD, 2023.