Authorized licensed use limited to: Chinese University of Hong Kong. Downloaded on September 24,2020 at 00:45:56 UTC from IEEE Xplore. Restrictions apply.
YANG et al.: GAN-OPC: MASK OPTIMIZATION WITH LITHOGRAPHY-GUIDED GENERATIVE ADVERSARIAL NETS 2823
TABLE I
SYMBOLS AND NOTATIONS USED THROUGHOUT THIS ARTICLE

Generative adversarial networks (GANs) have shown powerful generality when learning the distribution of a given dataset [26]–[28]. The basic optimization flow of a GAN contains two networks interacting with each other. The first one, called the generator, takes random vectors as input and generates samples that are as close to the true dataset distribution as possible. The second one, called the discriminator, tries to distinguish the true dataset from the generated samples. At convergence, ideally, the generator is expected to generate samples that have the same distribution as the true dataset. Inspired by the generative architecture and the adversarial training strategy, in this article we propose a lithography-guided generative framework that can synthesize a quasi-optimal mask with a single round of forward calculation. The quasi-optimal mask can be further refined by a few steps of a normal OPC engine. It should be noted that conventional GANs
cannot be directly applied here, due to the following two reasons.
1) Traditional DCGANs [28] are trained to mimic a dataset distribution, which is not enough for the target-mask mapping procedure.
2) Compensation patterns or segment movements in the mask are derived based upon a large area of local patterns (e.g., 1000 × 1000 nm²), which brings much training pressure on the generator.

In accordance with these problems, we develop customized GAN training strategies for the purpose of mask optimization. Besides, since layout topology types are limited within a specific area, we automatically synthesize local topology patterns based on size and spacing rules. The benefits of the artificial patterns are twofold: 1) we avoid training the neural network with large images and facilitate the training procedure significantly and 2) the automatically designed patterns are distributed uniformly and to some extent alleviate the over-fitting problem. Observing that most ILTs update the mask through steepest descent, which resembles the training procedure in neural networks, we connect an ILT structure with the generative networks and pretrain the generator through backpropagating the lithography error to the neuron weights. With the above pretraining phase, the generative model converges faster than training from randomly initialized neuron weights. Observe that GANs are typically much deeper than regular neural networks, which brings inevitable training challenges. We further enhance the framework with an advanced generator design that integrates the U-Net [29] and the subpixel super-resolution (SPSR) structure [30], which are more computationally efficient, promise faster convergence, and provide better mask image quality. The main contributions of this article are listed as follows.
1) We synthesize training patterns to enhance the computational efficiency and alleviate the over-fitting problem.
2) We propose an ILT-guided pretraining flow to initialize the generator, which can effectively facilitate the training procedure.
3) We design new objectives for the discriminator to make sure the model is trained toward a target-mask mapping instead of a distribution.
4) We enhance the GAN-OPC flow by integrating a U-Net and an SPSR structure into the generator, which promise better model convergence and generated mask quality.
5) The experimental results show that our framework can significantly facilitate the mask optimization procedure as well as generate masks that have better printability under the nominal condition.

The rest of this article is organized as follows. Section II lists basic concepts and the problem formulation. Section III discusses the details of the GAN-OPC framework, ILT-guided training strategies, and the enhanced GAN-OPC (EGAN-OPC) framework with the U-Net and SPSR techniques. Section IV presents the experimental results, followed by the conclusion in Section V.

II. PRELIMINARIES

In this section, we will discuss some preliminaries of mask optimization and generative adversarial nets. Major math symbols with their descriptions are listed in Table I. In order to avoid confusion, all the norms || · || are calculated with respect to flattened vectors.

The Hopkins theory of the partial coherence imaging system has been widely applied to mathematically analyze the mask behavior of lithography [31]. Because the Hopkins diffraction model is complex and not computationally friendly, [32] adopts the singular value decomposition (SVD) to approximate the original model with a weighted summation of the coherent systems

    I = \sum_{k=1}^{N^2} w_k |M \otimes h_k|^2    (1)

where h_k and w_k are the kth kernel and its weight. As suggested in [13], we pick the N_h-th order approximation to the system. Equation (1) becomes

    I = \sum_{k=1}^{N_h} w_k |M \otimes h_k|^2.    (2)

The lithography intensity corresponds to the exposure level on the photoresist that controls the final wafer image. In real
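As an illustration, the kernel-sum approximation in (2) can be evaluated with FFT-based convolutions. The following is a minimal NumPy sketch, not the authors' implementation; the kernels and weights are placeholders, and the circular boundary handling of the FFT convolution is an assumption made for brevity:

```python
import numpy as np

def aerial_image(mask, kernels, weights):
    """Approximate the aerial image intensity following Eq. (2):
    I = sum_k w_k * |M (*) h_k|^2, where (*) is 2-D convolution.

    mask:    (H, W) array with pixel values in [0, 1]
    kernels: iterable of complex 2-D arrays (optical kernels h_k)
    weights: iterable of scalar weights w_k
    """
    H, W = mask.shape
    mask_f = np.fft.fft2(mask)
    intensity = np.zeros((H, W))
    for h_k, w_k in zip(kernels, weights):
        # embed the kernel in a mask-sized array and convolve in the
        # frequency domain (circular convolution approximation)
        h_pad = np.zeros((H, W), dtype=complex)
        kh, kw = h_k.shape
        h_pad[:kh, :kw] = h_k
        field = np.fft.ifft2(mask_f * np.fft.fft2(h_pad))
        intensity += w_k * np.abs(field) ** 2
    return intensity
```

As a sanity check, a single delta-function kernel with unit weight reduces the intensity to the squared mask, since convolution with a delta is the identity.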
2824 IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 39, NO. 10, OCTOBER 2020
Fig. 5. (a) GAN-OPC training and (b) ILT-guided pretraining.

where Z_t is the target and Z is the wafer image of a given mask. Because the mask and wafer images are regarded as continuously valued matrices in the ILT-based optimization flow, we apply translated sigmoid functions to make the pixel values close to either 0 or 1

    Z = \frac{1}{1 + \exp[-\alpha \times (I - I_{th})]}    (12)

    M_b = \frac{1}{1 + \exp(-\beta \times M)}    (13)

where I_th is the threshold matrix in the constant resist model with all the entries being I_th, M_b is the incompletely binarized mask, while α and β control the steepness of the relaxed images. Combining (1)–(3), (11)–(13), and the analysis in [12], we can derive the gradient representation as follows:

    \frac{\partial E}{\partial M} = 2\alpha\beta \times M_b \odot (1 - M_b) \odot
        \big( ((Z - Z_t) \odot Z \odot (1 - Z) \odot (M_b \otimes H^*)) \otimes H
        + ((Z - Z_t) \odot Z \odot (1 - Z) \odot (M_b \otimes H)) \otimes H^* \big)    (14)

where H^* is the conjugate matrix of the original lithography kernel H. In the traditional ILT flow, the mask can be optimized through iteratively descending the gradient until E is below a threshold.

The objective of the mask optimization problem indicates that the generator is the most critical component in the GAN. Observing that both ILT and neural network optimization share a similar gradient descent procedure, we propose a joint training algorithm that takes advantage of the ILT engine, as depicted in Fig. 5(b). We initialize the generator with lithography-guided pretraining to make it converge well in the GAN optimization flow thereafter. The key step of neural network training is backpropagating the training error from the output layer to the input layer, while the neural weights are updated as follows:

    W_g = W_g - \frac{\lambda}{m} \nabla W_g    (15)

where ∇W_g is the accumulated gradient of a mini-batch of instances and m is the mini-batch instance count. Because (15) is naturally compatible with ILT, if we create a link between the generator and the ILT engine, the wafer image error can be backpropagated directly to the generator, as presented in Fig. 5.

The generator pretraining phase is detailed in Algorithm 2. In each pretraining iteration, we sample a mini-batch of target layouts (line 2) and initialize the gradients of the generator ∇W_g to zero (line 3); the mini-batch is fed into the generator to obtain generated masks (line 5). Each generated mask is loaded into the lithography engine to obtain a wafer image (line 6); the quality of the wafer image is estimated by (11) (line 7); we calculate the gradient of the lithography error E with respect to the neural network parameters W_g through the chain rule, i.e., (∂E/∂M)(∂M/∂W_g) (line 8); finally, W_g is updated following the gradient descent procedure (line 10).

Compared to training toward the ground truth (i.e., directly backpropagating the mask error to the neuron weights), ILT-guided pretraining provides step-by-step guidance when searching for a high-quality solution, which reduces the possibility of the generator being stuck in a local minimum region at an early training stage. Because ILT contains complicated convolutions and matrix multiplications that are computationally expensive, we approximate the pretraining stage through backpropagating errors of intermediate masks, which
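To make the relaxations (12)–(13) and the gradient (14) concrete, here is a hedged single-kernel NumPy sketch (the paper sums over N_h kernels; the kernel H and the steepness/threshold values α, β, and I_th below are illustrative placeholders, and convolution is implemented circularly via FFT):

```python
import numpy as np

def conv(a, b_f):
    """Circular 2-D convolution of a with a kernel given by its FFT b_f."""
    return np.fft.ifft2(np.fft.fft2(a) * b_f)

def ilt_gradient(M, H, Zt, alpha=50.0, beta=4.0, I_th=0.5):
    """Evaluate Eqs. (12)-(14) for a single lithography kernel H.
    Returns the relaxed wafer image Z and dE/dM for E = ||Z - Zt||^2."""
    H_f = np.fft.fft2(H)             # kernel H in the frequency domain
    Hc_f = np.fft.fft2(np.conj(H))   # conjugate kernel H*
    Mb = 1.0 / (1.0 + np.exp(-beta * M))             # Eq. (13): relaxed mask
    I = np.abs(conv(Mb, H_f)) ** 2                   # intensity, cf. Eq. (2)
    Z = 1.0 / (1.0 + np.exp(-alpha * (I - I_th)))    # Eq. (12): relaxed resist
    common = (Z - Zt) * Z * (1.0 - Z)                # shared Hadamard factor
    grad = 2 * alpha * beta * Mb * (1.0 - Mb) * np.real(
        conv(common * conv(Mb, Hc_f), H_f)
        + conv(common * conv(Mb, H_f), Hc_f))        # Eq. (14)
    return Z, grad

def ilt_step(M, H, Zt, lr=1.0):
    """One steepest-descent step of the traditional ILT flow; in the
    pretraining of Fig. 5(b) the same gradient is instead chained into
    the generator weights per Eq. (15)."""
    _, grad = ilt_gradient(M, H, Zt)
    return M - lr * grad
```

Note that when the wafer image already matches the target (Z_t = Z), the shared factor vanishes and so does the gradient, as expected of a stationary point of E.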
Fig. 8. Patterns generated from (a) deconvolution layers and (b) SPSR layers.
TABLE II
GENERATOR CONFIGURATION
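The SPSR layers compared against deconvolution in Fig. 8 are built on the sub-pixel (depth-to-space) rearrangement of [30]: an ordinary convolution produces r² feature channels per output channel, which are then interleaved into an r×-upscaled image instead of using a strided deconvolution. A minimal NumPy sketch of that rearrangement (channels-first layout is assumed here for illustration):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: (C*r*r, H, W) -> (C, H*r, W*r).
    Each group of r*r channels supplies the r x r sub-pixel grid of one
    output channel, avoiding the checkerboard artifacts that strided
    deconvolution layers tend to produce (cf. Fig. 8)."""
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into sub-pixel offsets
    x = x.transpose(0, 3, 1, 4, 2)    # (c, h, r, w, r): interleave offsets
    return x.reshape(c, h * r, w * r)
```

For example, with r = 2 four 2 × 2 feature maps are woven into one 4 × 4 map, so the network learns the upsampling filters in the low-resolution space.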
TABLE IV
DESIGN RULES USED
Fig. 10. Example of (a) target and (b) reference mask pair.
TABLE V
COMPARISON WITH STATE-OF-THE-ART
Fig. 14. Some wafer image details of (a) ILT [13] and (b) PGAN-OPC.
Fig. 15. Training behavior of the EGAN-OPC framework with faster and
better convergence.
Fig. 16. Result visualization of PGAN-OPC, EGAN-OPC, and ILT. Columns correspond to ten test cases from ICCAD 2013 CAD contest. Rows from top
to bottom are: (a) masks of [13]; (b) wafer images by masks of [13]; (c) masks of PGAN-OPC; (d) wafer images by masks of PGAN-OPC; (e) masks of
Enhanced GAN-OPC; (f) wafer images by masks of Enhanced GAN-OPC; and (g) target patterns.
suffer less proximity effects while inducing bridge or line-end pull-back defects, as shown in Fig. 14.

C. Evaluation of Enhanced GAN-OPC

Here, we show the effectiveness and the efficiency of the EGAN-OPC framework. In the first experiment, we illustrate the training behavior of the PGAN-OPC and the EGAN-OPC frameworks, as shown in Fig. 15. The red curve stands for the original PGAN-OPC model, which fluctuates fiercely around a large value. The dark curve refers to the results with the U-Net generator. The blue curve represents the complete version of the enhanced GAN model with both the U-Net structure and the embedded SPSR structure. It is encouraging to see that the U-Net alone can already ensure a good convergence in terms of L2 loss. As we have pointed out in the algorithm section, such a structure attains the neural network capacity with significantly lower computational cost, which is consistent with the trends of the L2 error during training.

In the second experiment, we compare the mask optimization results of EGAN-OPC with the original GAN-OPC and PGAN-OPC, as depicted in Fig. 16. The quantitative results can also be found in column "EGAN-OPC" of Table V. EGAN-OPC outperforms PGAN-OPC and GAN-OPC on most test cases with a better L2 error (39 500 versus 39 948) and a smaller PVB area (48 917 nm² versus 49 957 nm²) with only 70% of the average runtime of PGAN-OPC (see Fig. 13), which demonstrates the efficiency of the EGAN-OPC framework. It should also be noted that EGAN-OPC can be trained end-to-end without any interaction with the lithography engine, which induces a large amount of computational cost in PGAN-OPC.

D. On the Scalability of the GAN-OPC Family

In order to verify the scalability of our frameworks, we conduct further experiments on ten additional testcases that contain more patterns and larger total pattern areas. Similar to [7], these ten testcases are created from the original IBM benchmarks with additional geometries. The results of one example can be found in Fig. 17. It can be seen that our framework generalizes to more complex patterns. We also visualize the ILT convergence in terms of different mask initializations in Fig. 18. Here, we use testcase 18 as an example. It can be
TABLE VI
EXPERIMENTS ON LARGER BENCHMARKS

Fig. 17. Larger-case example of (a) mask pattern, (b) its wafer image, and (c) the corresponding target pattern.

Fig. 18. Visualization of convergence during ILT refinement. (a) L2 and (b) PV band.

seen that ILT converges much faster when using the mask initialized by EGAN-OPC as input, with only an ignorable PV band penalty. We did not compare the performance with model-based OPC, as the binary release of [6] encounters unknown failures on the new benchmarks.

We list the detailed optimization results in Table VI, where the columns are defined exactly the same as in Table V. It can be seen that GAN-OPC exhibits tradeoffs on nominal image quality and PVB compared to pure ILT, while both PGAN-OPC and EGAN-OPC show significant advantages on L2 error (86 105.7 versus 90 486.3) with similar or slightly better PVB (108 690.7 nm² versus 109 842.7 nm²). Besides, competitive results of our framework are also achieved with a shorter optimization time thanks to the good initialization offered by the generator, as shown in Fig. 19.

V. CONCLUSION

In this article, we have proposed a GAN-based mask optimization flow that takes target circuit patterns as input and generates quasi-optimal masks for further ILT refinement. We analyze the specialty of the mask optimization problem and design OPC-oriented training objectives for the GAN. Inspired by the observation that the ILT procedure resembles gradient descent in backpropagation, we develop an ILT-guided pretraining algorithm that initializes the generator with intermediate ILT results, which significantly facilitates the training procedure. We also enhance the GAN-OPC flow by integrating U-Net and SPSR layers in the generator, which ensures better model convergence and mask quality. The experimental results show that our framework not only accelerates ILT but also has the potential to generate better masks through offering better starting points in the ILT flow.

REFERENCES

[1] D. Z. Pan, B. Yu, and J.-R. Gao, "Design for manufacturing with emerging nanolithography," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 32, no. 10, pp. 1453–1472, Oct. 2013.
[2] B. Yu, X. Xu, S. Roy, Y. Lin, J. Ou, and D. Z. Pan, "Design for manufacturability and reliability in extreme-scaling VLSI," Sci. China Inf. Sci., vol. 59, pp. 1–23, Jun. 2016.
[3] ITRS. Accessed: Nov. 7, 2018. [Online]. Available: http://www.itrs.net
[4] X. Xu, T. Matsunawa, S. Nojima, C. Kodama, T. Kotani, and D. Z. Pan, "A machine learning based framework for sub-resolution assist feature generation," in Proc. ACM Int. Symp. Phys. Design (ISPD), 2016, pp. 161–168.
[5] A. Awad, A. Takahashi, S. Tanaka, and C. Kodama, "A fast process variation and pattern fidelity aware mask optimization algorithm," in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design (ICCAD), 2014, pp. 238–245.
[6] J. Kuang, W.-K. Chow, and E. F. Y. Young, "A robust approach for process variation aware mask optimization," in Proc. IEEE/ACM Design Autom. Test Europe (DATE), 2015, pp. 1591–1594.
[7] Y.-H. Su, Y.-C. Huang, L.-C. Tsai, Y.-W. Chang, and S. Banerjee, "Fast lithographic mask optimization considering process variation," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 35, no. 8, pp. 1345–1357, Aug. 2016.
[8] P. Yu, S. X. Shi, and D. Z. Pan, "Process variation aware OPC with variational lithography modeling," in Proc. ACM/IEEE Design Autom. Conf. (DAC), 2006, pp. 785–790.
[9] J.-S. Park et al., "An efficient rule-based OPC approach using a DRC tool for 0.18 μm ASIC," in Proc. IEEE Int. Symp. Qual. Electron. Design (ISQED), 2000, pp. 81–85.
[10] P. Yu, S. X. Shi, and D. Z. Pan, "True process variation aware optical proximity correction with variational lithography modeling and model calibration," J. Micro Nanolithography MEMS MOEMS, vol. 6, no. 3, 2007, Art. no. 031004.
[11] A. Awad, A. Takahashi, S. Tanaka, and C. Kodama, "A fast process-variation-aware mask optimization algorithm with a novel intensity modeling," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 25, no. 3, pp. 998–1011, Mar. 2017.
[12] A. Poonawala and P. Milanfar, "Mask design for optical microlithography—An inverse imaging problem," IEEE Trans. Image Process., vol. 16, no. 3, pp. 774–788, Mar. 2007.
[13] J.-R. Gao, X. Xu, B. Yu, and D. Z. Pan, "MOSAIC: Mask optimizing solution with process window aware inverse correction," in Proc. ACM/IEEE Design Autom. Conf. (DAC), 2014, pp. 1–6.
[14] Y. Ma, J.-R. Gao, J. Kuang, J. Miao, and B. Yu, "A unified framework for simultaneous layout decomposition and mask optimization," in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design (ICCAD), 2017, pp. 81–88.
[15] W. Xiong, J. Zhang, Y. Wang, Z. Yu, and M.-C. Tsai, "A gradient-based inverse lithography technology for double-dipole lithography," in Proc. IEEE Int. Conf. Simulat. Semicond. Processes Devices, 2009, pp. 1–4.
[16] R. Viswanathan, J. T. Azpiroz, and P. Selvam, "Process optimization through model based SRAF printing prediction," in Proc. SPIE Adv. Lithography, vol. 8326, 2012, Art. no. 83261A.
[17] T. Matsunawa, J.-R. Gao, B. Yu, and D. Z. Pan, "A new lithography hotspot detection framework based on AdaBoost classifier and simplified feature extraction," in Proc. SPIE, vol. 9427, 2015, Art. no. 94270S.
[18] H. Zhang, B. Yu, and E. F. Y. Young, "Enabling online learning in lithography hotspot detection with information-theoretic feature optimization," in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design (ICCAD), 2016, pp. 1–8.
[19] H. Yang, L. Luo, J. Su, C. Lin, and B. Yu, "Imbalance aware lithography hotspot detection: A deep learning approach," J. Micro Nanolithography MEMS MOEMS, vol. 16, no. 3, 2017, Art. no. 033504.
[20] H. Yang, J. Su, Y. Zou, B. Yu, and E. F. Y. Young, "Layout hotspot detection with feature tensor generation and deep biased learning," in Proc. ACM/IEEE Design Autom. Conf. (DAC), 2017, pp. 1–6.
[21] H. Yang, P. Pathak, F. Gennari, Y.-C. Lai, and B. Yu, "Detecting multilayer layout hotspots with adaptive squish patterns," in Proc. IEEE/ACM Asia South Pac. Design Autom. Conf. (ASPDAC), 2019, pp. 299–304.
[22] T. Matsunawa, B. Yu, and D. Z. Pan, "Optical proximity correction with hierarchical Bayes model," J. Micro Nanolithography MEMS MOEMS, vol. 15, no. 2, 2016, Art. no. 021009.
[23] A. Gu and A. Zakhor, "Optical proximity correction with linear regression," IEEE Trans. Semicond. Manuf., vol. 21, no. 2, pp. 263–271, May 2008.
[24] R. Luo, "Optical proximity correction using a multilayer perceptron neural network," J. Opt., vol. 15, no. 7, 2013, Art. no. 075708.
[25] S. Choi, S. Shim, and Y. Shin, "Machine learning (ML)-guided OPC using basis functions of polar Fourier transform," in Proc. SPIE, vol. 9780, 2016, Art. no. 97800H.
[26] I. Goodfellow et al., "Generative adversarial nets," in Proc. Conf. Neural Inf. Process. Syst. (NIPS), 2014, pp. 2672–2680.
[27] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein generative adversarial networks," in Proc. Int. Conf. Mach. Learn. (ICML), 2017, pp. 214–223.
[28] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," in Proc. Int. Conf. Learn. Represent. (ICLR), 2016, pp. 1–16.
[29] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proc. Int. Conf. Med. Image Comput. Comput. Assist. Intervent. (MICCAI), 2015, pp. 234–241.
[30] W. Shi et al., "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 1874–1883.
[31] H. H. Hopkins, "The concept of partial coherence in optics," Proc. Roy. Soc. London A Math. Phys. Eng. Sci., vol. 208, no. 1093, pp. 263–277, 1951.
[32] N. B. Cobb, "Fast optical and process proximity correction algorithms for integrated circuit manufacturing," Ph.D. dissertation, Dept. Elect. Eng. Comput. Sci., Univ. California at Berkeley, Berkeley, CA, USA, 1998.
[33] S. Banerjee, Z. Li, and S. R. Nassif, "ICCAD-2013 CAD contest in mask optimization and benchmark suite," in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design (ICCAD), 2013, pp. 271–274.
[34] W.-C. Huang et al., "Two threshold resist models for optical proximity correction," in Proc. Opt. Microlithography XVII, vol. 5377, 2004, pp. 1536–1544.
[35] J. Andres and T. Robles, "Integrated circuit layout design methodology with process variation bands," U.S. Patent 8 799 830, Aug. 5, 2014.
[36] J. Masci, U. Meier, D. C. Cireşan, and J. Schmidhuber, "Stacked convolutional auto-encoders for hierarchical feature extraction," in Proc. Int. Conf. Artif. Neural Netw. (ICANN), 2011, pp. 52–59.
[37] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770–778.
[38] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 4700–4708.
[39] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. Int. Conf. Learn. Represent. (ICLR), 2015, pp. 1–14.
[40] M. Abadi et al., "TensorFlow: A system for large-scale machine learning," in Proc. USENIX Symp. Oper. Syst. Design Implement. (OSDI), 2016, pp. 265–283.

Haoyu Yang received the B.E. degree from Qiushi Honors College, Tianjin University, Tianjin, China, in 2015. He is currently pursuing the Ph.D. degree with the Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong. He has interned with ASML, San Jose, CA, USA, and Cadence Design Systems, San Jose. He received the 2019 Nick Cobb Scholarship by SPIE and Mentor Graphics. His current research interests include machine learning and very large-scale integration design and sign-off.

Shuhe Li received the B.Sc. degree from the Chinese University of Hong Kong, Hong Kong, in 2019, where he is currently pursuing the M.Sc. degree in computer science.
Zihao Deng received the B.Sc. degree (First Class Hons.) in computer science from the Chinese University of Hong Kong, Hong Kong, in 2019. His current research interests include machine learning algorithms, deep neural networks, and information theory.

Bei Yu (S'11–M'14) received the Ph.D. degree from the University of Texas at Austin, Austin, TX, USA, in 2014. He is currently an Assistant Professor with the Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong. Dr. Yu was a recipient of five Best Paper Awards from Integration, the VLSI Journal in 2018, the International Symposium on Physical Design in 2017, the SPIE Advanced Lithography Conference in 2016, the International Conference on Computer-Aided Design in 2013, and the Asia and South Pacific Design Automation Conference in 2012, and five ICCAD/ISPD Contest Awards. He is the Editor-in-Chief of the IEEE Technical Committee on Cyber-Physical Systems Newsletter. He has served as the TPC Chair for the ACM/IEEE Workshop on Machine Learning for CAD, on many journal editorial boards, and on conference committees.