ABSTRACT Image super-resolution reconstruction uses a specific algorithm to restore a low-resolution, blurred image of a scene to a high-resolution image. In recent years, with the vigorous development of deep learning, this technology has been widely used in many fields, and more and more deep-learning-based methods for image super-resolution reconstruction have been studied. According to the principle of GANs, a pseudo high-resolution image is produced by the generator, and the discriminator then measures the difference between that image and the real high-resolution image to judge its authenticity. Based on SRGAN (super-resolution generative adversarial network), this paper makes three main improvements. First, it introduces a channel attention mechanism, adding a Ca (channel attention) module to the SRGAN network and increasing the network depth to better express high-frequency features. Second, it deletes the original BN (batch normalization) layers to improve network performance. Third, it modifies the loss function to reduce the impact of noise on the image. The experimental results show that the proposed method is superior to current methods in both quantitative and qualitative indicators, promotes the recovery of high-frequency detail, mitigates the artifact problem, and improves the PSNR (peak signal-to-noise ratio) on the Set5, Set14, and BSD100 test sets.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
10.1109/ACCESS.2021.3100069, IEEE Access
Baozhong Liu: A super resolution algorithm based on attention mechanism and SRGAN network
enhancement unit and a compression unit [14]. This method can maintain good reconstruction accuracy and achieve real-time performance. Shen Mingyu et al. (2019) obtained the structural feature information of low-resolution images through a coding network and obtained high-frequency features through a multi-path feedforward network composed of stage feature fusion units [15]. Yang Juan et al. (2019) completed the reconstruction task of the generator module by progressively extracting high-frequency features at different scales of the image [16]. Li Langyu et al. (2018) constructed a detail supplement network that supplements image features by exploiting the local similarity of the image, and used a convolution layer to fuse the features obtained by the detail supplement network with those extracted by the feature extraction network, so as to reconstruct the high-resolution image [17]. For industrial application, Ahn et al. (2018) proposed a new method, CARN (cascading residual network), which uses local and global cascading mechanisms to integrate multi-layer features and thus speed up the model [18]. Fang et al. (2020) integrated the prior knowledge of soft-edge images into the model to assist image reconstruction, avoiding a blind increase in network depth [19]. To address the problem that interpolation, especially at very high super-resolution factors, may over-smooth the image, Yang et al. (2019) proposed the deep recursive fusion network (DRFN), which uses transposed convolution instead of bicubic interpolation for up-sampling and fuses the different levels of features stored in recursive residual blocks to form the final high-resolution image [20]. Because existing networks seldom mine the correlation of features between layers, the learning ability of convolutional neural networks is reduced. To avoid over-fitting of low-frequency texture when training highly complex networks, features of different frequencies are processed hierarchically: the super-resolution feedback network (SRFBN) uses high-level information to refine low-level representations, realizing this feedback through the hidden state of a constrained recurrent neural network [21]. To solve super-resolution for arbitrary scale factors (including non-integer factors) with a single model, Hu et al. (2020) used the scale factor as an input to dynamically predict the weights of the up-sampling filter, and these weights are used to generate high-resolution images of any size [22-24]. Zhang et al. (2020) proposed DPSR (deep plug-and-play super-resolution) [25]: a new single-image super-resolution degradation model was designed to replace blind deblurring kernel estimation, and an energy function was introduced to optimize the degradation model.

The main contributions of this paper are as follows:
(1) Based on the SRGAN network, the algorithm changes the network structure, introduces the RCAN [26] (residual channel attention networks) attention mechanism module, and increases the network depth to better express high-frequency features [27].
(2) For the loss function, the square term of MSE (mean square error) amplifies the influence of large errors on image super-resolution, so the L1 loss function is used instead [28].
(3) The experimental results show that the artifacts produced by SRGAN super-resolution are reduced to a certain extent and that the stability of the model is enhanced.

II. SUPER RESOLUTION RECONSTRUCTION MODEL

FIGURE 1 Network Framework of SRGAN

SRGAN is a super-resolution network built on a generative adversarial network; its aim is to raise the usable super-resolution amplification factor. Research shows that its super-resolution effect is better at magnifications of ×4 or ×8. Images reconstructed by traditional super-resolution networks or linear networks tend to be overly smooth, so fine details are poorly restored. SRGAN, by contrast, is the process of two models competing with each other: the G (generator) network generates pseudo high-resolution images from the training set, while the D (discriminator) network judges how far the images generated by G deviate from the real high-resolution images, and this cycle of generation and verification is carried out continuously. The adversarial objective of G and D is shown in (1):

min_G max_D V(D, G) = E_{x~P(x)}[log D(x)] + E_{z~P(z)}[log(1 − D(G(z)))]  (1)

where x is a real sample; D(x) is the probability that x is judged to be a real picture after passing through the discriminator; z is the input noise; and G(z) is the output sample of the generator network.

A. SRGAN NETWORK MODEL
The SRGAN network is composed of a generator network and a discriminator network. The generator upscales the low-resolution image to reconstruct a high-resolution image. The discriminator compares the reconstructed image with the real high-resolution image and outputs a probability that the input is real. An adversarial loss term is added to the loss function to guide the network to generate high-quality pseudo high-resolution images.

The SRGAN generator is mainly composed of sub-pixel layers and residual blocks, but its results on high-frequency features are not good [29]. Therefore, an attention mechanism is added in this paper, especially for the processing of high-frequency features.
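The minimax objective in (1) can be illustrated numerically. The following is a minimal NumPy sketch (not the paper's training code) that evaluates V(D, G) for toy discriminator outputs:

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Evaluate V(D, G) from Eq. (1): E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: discriminator outputs D(x) on real samples, in (0, 1)
    d_fake: discriminator outputs D(G(z)) on generated samples, in (0, 1)
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A well-trained discriminator pushes D(x) toward 1 and D(G(z)) toward 0,
# which increases V; the generator updates to decrease V (fooling D).
v_confident_d = gan_value([0.9, 0.95], [0.05, 0.1])  # D separates well
v_fooled_d = gan_value([0.6, 0.55], [0.5, 0.45])     # G partly fools D
assert v_confident_d > v_fooled_d
```

This is only the value function itself; in training, D ascends this quantity while G descends it, alternating updates as described above.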
1) GENERATING NETWORK
The generator network model of this paper is shown in Figure 4. Its structure includes six residual modules and a long skip connection; each residual module in turn contains 10 short skip connections and residual channel attention blocks. This residual-in-residual structure allows the super-resolution network to be trained deeper, and the residual attention blocks are introduced to better recover high-frequency image information. Section II-B below introduces in detail the composition of the residual channel attention block in the generator network.

FIGURE 4 SRCAGAN Generator Network Model Diagram

2) DISCRIMINATOR NETWORK
The discriminator network trained in this paper is a linear network structure. Referring to Ledig et al. [30], the structure of the discriminator network in the SRGAN model is adopted; the specific structure is shown in Figure 5. The first layer of the model is a 2D convolution with a leaky ReLU activation function, followed by seven modules each composed of a convolution, a BN layer, and a leaky ReLU activation function. The last part consists of an average pooling layer, two convolution layers, and a leaky ReLU function. To simplify the calculation and avoid introducing a maximum pooling layer into the network structure, leaky ReLU is used as the activation function. As the network deepens, the feature extraction ability is enhanced, the number of extracted feature maps increases, and the spatial size of each feature map decreases. Leaky ReLU performs better than ReLU on negative values and avoids sparse gradients, which is why it is selected as the activation function. Finally, the features are passed into two fully connected layers and a final sigmoid activation function to obtain the probability that the image generated by the generator network resembles the real image. The adversarial process against the generator network continues until the value produced by the discriminator network stabilizes. To avoid vanishing gradients, the network applies batch normalization after each leaky ReLU to enhance generalization.

FIGURE 5 SRCAGAN Discriminator Network Model Diagram

B. ATTENTION MECHANISM
The attention mechanism module focusing on high frequency is called Ca; it is obtained by referring to RCAB (residual channel attention block) and CBAM [31] (convolutional block attention module). One branch is composed of an average pooling layer, two convolution layers, and a ReLU activation function, while the other branch is composed of a maximum pooling layer, two convolution layers, and a ReLU activation function. The outputs of the two branches are added, and the final output is obtained by multiplying the input X by the resulting attention output y. In this way, the weights of the important channels, namely the high-frequency feature channels, are increased, while the weights of channels that contribute little to image quality are reduced. The specific structure is shown in Figure 2.

FIGURE 2 ResCA Attention Mechanism Module Diagram

The general CA includes the following three parts:
1. Squeeze: Ca (channel attention) compresses each image feature map into a point, while SA (spatial attention) compresses the n feature maps into one map.
2. Excitation: to obtain the connections between channels, a vector of length C is produced (CA); alternatively, a convolution layer produces the spatial relations as an H × W × 1 map (SA).
3. Scaling: a sigmoid converts the attention map into values between 0 and 1, which are then point-wise multiplied with the input.
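The squeeze, excitation, and scaling steps above can be sketched as follows. This is a minimal NumPy illustration of a two-branch channel attention module (average-pool branch plus max-pool branch, summed and squashed by a sigmoid); the 1×1 convolutions are approximated by channel-mixing matrices, and the channel count and reduction ratio are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def ca_module(x, w1, w2):
    """Two-branch channel attention sketch for a feature map x of shape
    (C, H, W). Each branch squeezes the spatial dims into a C-vector,
    passes it through two 1x1 'convolutions' (here: matrices w1, w2 with
    a ReLU in between); the branch outputs are summed, squashed by a
    sigmoid, and used to rescale the input channels."""
    avg = x.mean(axis=(1, 2))          # squeeze: average pooling -> (C,)
    mx = x.max(axis=(1, 2))            # squeeze: max pooling -> (C,)

    def branch(v):
        h = np.maximum(w1 @ v, 0.0)    # first conv + ReLU (channel reduction)
        return w2 @ h                  # second conv (channel restoration)

    weights = sigmoid(branch(avg) + branch(mx))  # scaling: values in (0, 1)
    return x * weights[:, None, None]            # reweight input channels

rng = np.random.default_rng(0)
c, r = 8, 2                            # channels and reduction ratio (assumed)
x = rng.standard_normal((c, 16, 16))
w1 = rng.standard_normal((c // r, c)) * 0.1
w2 = rng.standard_normal((c, c // r)) * 0.1
y = ca_module(x, w1, w2)
assert y.shape == x.shape
```

Because the sigmoid output lies strictly in (0, 1), every channel is attenuated in proportion to its learned importance, which is the reweighting behavior described above.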
Inspired by the EDSR residual module [23]: because long skip connections between residual blocks can make the main part of the network pay more attention to the information of low-resolution features, this paper connects several ResCA modules to form the RResCA module. The model is shown in Figure 3.

FIGURE 3 Module of RResCA

It has been proved in reference [32] that neural networks constructed by stacking residual blocks with long skip connections perform better; therefore, the residual group consists of several RResCA modules. However, a network that is too deep causes training difficulties, so the total number of RResCA modules is set to 5 and 10, and the two settings are compared to obtain the best training effect.

At the same time, all BN layers of the generator network in SRGAN are removed, because removing BN layers has been proved to significantly enhance network performance in various PSNR-oriented tasks, including super-resolution and deblurring, and to reduce the computational complexity of training [33]. The function of a BN layer is to normalize all image features using the mean and variance of each batch during training; during testing, the mean and variance estimated from the training data are substituted into the model. However, when the training data set and the test data set differ greatly, the BN layer often produces unpleasant artifacts [34]. This is because the BN layer weakens the generalization ability of the model to a certain extent. It has been observed empirically that BN layers may produce artifacts when the network is deep or trained under a GAN framework; these artifacts appear intermittently across iterations and settings, lacking stability. Therefore, the BN layers are removed for the stability and consistency of training. In addition, removing the BN layers helps to improve generalization and reduces the computational complexity and memory usage of the model.

C. LOSS FUNCTION
The overall loss function of SRGAN is shown in formula (2):

l^{SR} = l_x + 2×10^{-9} · l_{tv} + 10^{-3} · l_{Gen}^{SR}  (2)

where l_x is the content loss, here the least absolute deviation (L1) loss:

l_x = l_{l1} = (1/(r^2 WH)) Σ_{x=1}^{rW} Σ_{y=1}^{rH} |(I^{HR})_{x,y} − (G_{θG}(I^{LR}))_{x,y}|  (3)

The L1 loss is plain in its error calculation: it does not impose an excessive penalty on large error terms, and its change is proportional to the absolute value of the per-pixel error relative to the real image. Following the WGAN network, the L1 loss function is designed as in formulas (2) and (3), where (I^{HR})_{x,y} is the high-resolution target image used by the loss function and (G_{θG}(I^{LR}))_{x,y} is the reconstructed image.

In addition to the experiments with the L1 loss function, a series of comparative experiments with the MSE loss function are also conducted. The MSE loss function is often used in super-resolution reconstruction models; it directly optimizes the squared difference between each pixel of the high-resolution and reconstructed images, so the PSNR (peak signal-to-noise ratio) of the generated image is high. Although the MSE loss converges well, it usually gives large weight to outliers, which makes terms with larger errors more influential than terms with smaller errors. At the same time, MSE measures raw pixel differences, which is why the results are often unsatisfying to the human eye and frequently show artifacts. Formula (4) gives the MSE loss function:

l_{MSE}^{SR} = (1/(r^2 WH)) Σ_{x=1}^{rW} Σ_{y=1}^{rH} ((I^{HR})_{x,y} − (G_{θG}(I^{LR}))_{x,y})^2  (4)

In super-resolution reconstruction, even a little noise in the image has a great impact on the result, because many algorithms amplify noise during reconstruction. To maintain the smoothness and integrity of the image, a regularization term is therefore added to the optimization model. TV loss (total variation loss) is a commonly used regularization term that penalizes the differences between adjacent pixels in the image; it is given by formula (5):

l_{tv} = ∫∫_D √(u_x^2 + u_y^2) dx dy  (5)

The adversarial loss is given by formula (6):

l_{Gen}^{SR} = Σ_{n=1}^{N} −log D_{θD}(G_{θG}(I^{LR}))  (6)

III. EXPERIMENT AND ANALYSIS
A. EXPERIMENTAL ENVIRONMENT AND DATA SET
The GPU used in this experiment is an NVIDIA GeForce GTX Titan X, the programming language is Python, the IDE is PyCharm 2017, and the deep learning framework is PyTorch. The VOC2012 data set is used for training, with a magnification of ×4. In testing, the bicubic algorithm [35], the SRCNN algorithm [36], the SRResNet algorithm [37], and the SRGAN algorithm are compared horizontally on the Set5, Set14, Urban100, and BSD100 data sets. Since this experiment modifies the SRGAN model, the comparison focuses on the super-resolution PSNR value, SSIM value, and visual effect relative to the SRGAN algorithm.
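The loss terms of formulas (2) through (6) can be sketched numerically. The following NumPy illustration uses a discrete approximation of the TV integral and assumed array shapes; it is a sketch of the loss definitions, not the paper's training code:

```python
import numpy as np

def l1_loss(hr, sr):
    """Content loss, Eq. (3): mean absolute pixel error."""
    return np.mean(np.abs(hr - sr))

def mse_loss(hr, sr):
    """Eq. (4): mean squared pixel error (the square amplifies outliers)."""
    return np.mean((hr - sr) ** 2)

def tv_loss(sr):
    """Eq. (5), discretized: sum of gradient magnitudes over the image."""
    ux = np.diff(sr, axis=0)[:, :-1]   # vertical pixel differences
    uy = np.diff(sr, axis=1)[:-1, :]   # horizontal pixel differences
    return np.sum(np.sqrt(ux ** 2 + uy ** 2))

def adversarial_loss(d_fake):
    """Eq. (6): -sum(log D(G(I_LR))) over the generated samples."""
    return -np.sum(np.log(d_fake))

def total_loss(hr, sr, d_fake):
    """Eq. (2): l_SR = l_1 + 2e-9 * l_tv + 1e-3 * l_Gen."""
    return (l1_loss(hr, sr)
            + 2e-9 * tv_loss(sr)
            + 1e-3 * adversarial_loss(d_fake))

hr = np.ones((8, 8))
sr = np.full((8, 8), 0.9)
assert np.isclose(l1_loss(hr, sr), 0.1)
assert np.isclose(mse_loss(hr, sr), 0.01)
assert np.isclose(tv_loss(sr), 0.0)  # a constant image has zero variation
```

Note how the 0.1 pixel error contributes 0.1 to L1 but only 0.01 to MSE, while a 1.0 error would contribute 1.0 to both: this is the outlier-weighting difference that motivates the switch to L1.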
B. EVALUATION CRITERION
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
10.1109/ACCESS.2021.3100069, IEEE Access
Baozhong Liu: A super resolution algorithm based on attention mechanism and SRGAN network
In this experiment, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were used as the two criteria to evaluate images.

1) PSNR
Peak signal-to-noise ratio (PSNR) is one of the most widely used standards for evaluating image quality, and it is usually computed from the mean square error. For a monochrome W × H original high-definition image X and super-resolution image Y, the MSE is:

M_{MSE} = (1/(WH)) Σ_{i=0}^{W−1} Σ_{j=0}^{H−1} [X(i,j) − Y(i,j)]^2  (7)

For three-channel color images, each pixel has three channels, so the formula becomes:

M_{MSE} = (1/(3WH)) Σ_{i=0}^{W−1} Σ_{j=0}^{H−1} [X(i,j)_{R,G,B} − Y(i,j)_{R,G,B}]^2  (8)

The PSNR formula is then:

P_{PSNR} = 10 log_{10}(X_{MAX}^2 / M_{MSE}) = 20 log_{10}(X_{MAX} / √M_{MSE})  (9)

Generally, the higher the PSNR, the better the super-resolution reconstruction. However, since the appearance of GAN networks, human visual perception has been introduced into super-resolution evaluation: although the PSNR of GAN networks is not as high as that of earlier linear networks, their visual perceptual quality is much better.

2) SSIM
Structural similarity is another widely used index in image super-resolution reconstruction, and it better reflects human visual perception. The SSIM formula is based on the mutual measurement of three indexes between samples x and y: luminance, contrast, and structure. The formulas are as follows:

l(x, y) = (2 μ_x μ_y + c_1) / (μ_x^2 + μ_y^2 + c_1)  (10)
c(x, y) = (2 σ_x σ_y + c_2) / (σ_x^2 + σ_y^2 + c_2)  (11)
s(x, y) = (σ_{xy} + c_3) / (σ_x σ_y + c_3)  (12)
SSIM(x, y) = [l(x, y)]^α [c(x, y)]^β [s(x, y)]^γ  (13)

Commonly c_3 = c_2/2, where μ_x is the mean of x, μ_y is the mean of y, σ_x^2 is the variance of x, σ_y^2 is the variance of y, and σ_{xy} is the covariance of x and y. In practical application, it is usually set that α = β = γ = 1.

C. ANALYSIS OF EXPERIMENTAL RESULTS
In this experiment, objective and subjective evaluation results are used to demonstrate the super-resolution ability of the model. Not only are the network depth and the loss function used in this model compared vertically, but representative super-resolution models from different periods are also compared horizontally.

1) COMPARISON OF MODELS AND ALGORITHMS
In this paper, CA-10 (the 10-layer attention mechanism network) is used to compare the super-resolution results of each model when the loss function is MSE and when it is L1. The amplification coefficient K is 4.

TABLE I COMPARISON OF 5 LEVELS OF ATTENTION MODULE WITH 10 LEVELS OF ATTENTION MODULE (K=4)

        SRCAGAN-5            SRCAGAN-10
Set5    L1 loss   MSE loss   L1 loss   MSE loss
PSNR    29.99     29.61      30.57     29.89
SSIM    0.8657    0.8531     0.8730    0.8612
Set14   L1 loss   MSE loss   L1 loss   MSE loss
PSNR    26.39     26.25      26.58     26.29
SSIM    0.7426    0.7286     0.7666    0.7328

During the experiments, 5, 10, and 15 residual attention blocks were tried. With 15 residual blocks, the large number of parameters easily causes memory overload during training, so only 5 and 10 residual blocks are compared.

It can be seen from the table that the PSNR and SSIM of SRCAGAN-10 on Set5 and Set14 are better than those of SRCAGAN-5. Therefore, SRCAGAN-10 is selected as the comparison model in the subsequent comparison with other algorithm models.

2) OBJECTIVE EVALUATION RESULTS
In this section, we compare each algorithm model with our algorithm on three data sets: Set5, Set14, and BSD100.

Set5 and Set14 are data sets of low-complexity single images used for super-resolution research based on non-negative neighbor embedding; that is, a high-resolution image is reconstructed from the low-resolution image by the deep learning algorithm. The data set was published in 2012 by Bell Laboratories, France, and is widely used in super-resolution research.

BSD100 is a data set for image segmentation and edge detection. It contains segmentations of 1000 Corel images hand-labeled by 30 people; 50% of the segmentations were produced from the color images and the other 50% from the grayscale images. The data set was proposed by UC Berkeley in 2001.

TABLE II COMPARISON OF PSNR AND SSIM ON SET5 AND SET14 (K=4)

Set5    Bicubic   SRCNN    SRResNet   SRGAN    Ours
PSNR    28.51     30.12    32.07      29.39    30.20
SSIM    0.8209    0.8630   0.9020     0.8469   0.8726
Set14   Bicubic   SRCNN    SRResNet   SRGAN    Ours
PSNR    25.98     27.20    28.50      26.01    26.61
SSIM    0.7485    0.7860   0.8185     0.7380   0.7668

TABLE III COMPARISON OF PSNR AND SSIM ON BSD100 (K=4)

BSD100  Bicubic   SRCNN    SRResNet   SRGAN    Ours
PSNR    25.95     26.70    27.60      25.18    26.61
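The evaluation criteria of formulas (7) through (13) can be sketched in NumPy. The SSIM below is the simplified global form with α = β = γ = 1 and c_3 = c_2/2; the stabilizer constants follow the common (0.01·X_MAX)^2 and (0.03·X_MAX)^2 convention, which is an assumption here, and standard SSIM implementations compute these statistics over local windows, so values will differ from library results:

```python
import numpy as np

def psnr(x, y, x_max=255.0):
    """Eqs. (7) and (9): PSNR from the per-pixel MSE (monochrome W x H)."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10(x_max ** 2 / mse)

def ssim_global(x, y, x_max=255.0):
    """Eqs. (10)-(13), computed globally with alpha = beta = gamma = 1."""
    x = x.astype(float)
    y = y.astype(float)
    c1 = (0.01 * x_max) ** 2          # stabilizer constants (assumed)
    c2 = (0.03 * x_max) ** 2
    c3 = c2 / 2.0
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    cov = np.mean((x - mx) * (y - my))
    l = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)  # luminance, Eq. (10)
    c = (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)  # contrast,  Eq. (11)
    s = (cov + c3) / (sx * sy + c3)                    # structure, Eq. (12)
    return l * c * s                                   # Eq. (13)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))
# Smaller pixel error yields higher PSNR; an identical pair yields SSIM = 1.
assert psnr(img, img + 1.0) > psnr(img, img + 10.0)
assert abs(ssim_global(img, img) - 1.0) < 1e-9
```

A uniform error of 1 gray level against a 255 peak gives PSNR = 10·log10(255²) ≈ 48.1 dB, matching the formula directly.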
[10] Yuantao Chen, Volachith Phonevilay, Jiajun Tao, Xi Chen, Runlong Xia, Qian Zhang, Kai Yang, Jie Xiong, Jingbo Xie. The Face Image Super-Resolution Algorithm Based on Combined Representation Learning. Multimedia Tools and Applications, 2020, https://doi.org/10.1007/s11042-020-09969-1.
[11] Huang Shuaikun, Chen Honggang, Qing Linbo, Hao Chuanming. Super resolution reconstruction of raw core image [J]. Information Technology and Network Security, 2020, 39(10): 1-6.
[12] Multimedia; Reports Summarize Multimedia Findings from Shandong University of Science and Technology (Chest X-ray Images Super-resolution Reconstruction Via Recursive Neural Network) [J]. Network Weekly News, 2020.
[13] Yuantao Chen, Jiajun Tao, Linwu Liu, Jie Xiong, Runlong Xia, Jingbo Xie, Qian Zhang, Kai Yang. Research of improving semantic image segmentation based on a feature fusion model. Journal of Ambient Intelligence and Humanized Computing, 2020, https://doi.org/10.1007/s12652-020-02066-z.
[14] Lin Renpu, Zhang Li, Ma Chenhui, Liu Xuan, Zhang Hao. Infrared image super resolution reconstruction based on improved depth back projection network [J]. Infrared Technology, 2020, 42(09): 873-879.
[15] Yang Yong, Wu Zheng, Zhang Dongyang, Liu Jiaxiang. Super resolution reconstruction algorithm based on progressive feature enhanced network [J]. Signal Processing, 2020, 36(09): 1598-1606.
[16] Wang Haitao, Yu Wenjie, Zhang Guanglei. Mine image super-resolution reconstruction method based on online multi dictionary learning [J]. Automation of Industry and Mining, 2020, 46(09): 74-78.
[17] Wang Yan, Li Ang, Wang Shengquan. Study on the influence of training set on automatic target recognition under super resolution of remote sensing image [J]. Journal of Chongqing University of Technology (Natural Science), 2021, 35(02): 136-143.
[18] Yuantao Chen, Linwu Liu, Jiajun Tao, Runlong Xia, Qian Zhang, Kai Yang, Jie Xiong, Xi Chen. The improved image inpainting algorithm via encoder and similarity constraint. The Visual Computer, 2020, https://doi.org/10.1007/s00371-020-01932-3.
[19] Liu Ying, Zhu Li, Lin Qingfan. Image super-resolution reconstruction based on spatial transformation network [J]. Journal of Xi'an University of Posts and Telecommunications, 2020, 25(05): 45-49.
[20] Xie Haiping, Xie Kaili, Yang Haitao. Research progress of image super-resolution method [J]. Computer Engineering and Application, 2020, 56(19): 34-41.
[21] Liu Xingguo. Infrared image super-resolution reconstruction based on compressed sensing [D]. University of Electronic Science and Technology, 2020.
[22] Lu Feng, Zhou Lin, Cai Xiaohui. Research on low resolution face recognition algorithm for security monitoring scenes [J]. Computer Application Research, 2021, 38(04): 1230-1234.
[23] Shamsolmoali, Pourya, et al. "Image synthesis with adversarial networks: A comprehensive survey and case studies." Information Fusion (2021).
[24] Yuan Jian, Wang Shanshan, Luo Yingwei. Crowd counting model in public places based on image field division [J]. Computer Application Research, 2021, 38(04): 1256-1260+1280.
[25] Xie Chao, Zhu Hongyu. Image super-resolution reconstruction method based on deep convolution neural network [J]. Sensors and Microsystems, 2020, 39(09): 142-145.
[26] Chen Lijuan, Qian Wei. Face image fuzzy texture recognition method based on super resolution laser imaging [J]. Laser Magazine, 2020, 41(08): 96-100.
[27] Wang Zhi, Li Anyi, Fang Jinxiong. Super resolution reconstruction of remote sensing image based on dense convolution neural network [J]. Surveying and Mapping and Spatial Geographic Information, 2020, 43(08): 4-8.
[28] Hu Longlong. Image super-resolution reconstruction based on improved neighborhood embedding and directed kernel regression [J]. Digital Technology and Application, 2020, 38(08): 126-127+131.
[29] Xu Zhigang, Yan Juanjuan, Zhu Honglei. Mural image super-resolution reconstruction algorithm based on multi-scale residual attention network [J]. Progress in Laser and Optoelectronics, 2020, 57(16): 152-159.
[30] Liu Xingchen, Jia Juncheng, Zhang Li, Hu Qinhan. Image super-resolution feature concentration network [J/OL]. Computer Engineering and Application: 1-9 [2021-06-07]. http://kns.cnki.net/kcms/detail/11.2127.tp.20200820.1009.014.html.
[31] Li Lan, Zhang Yun, Du Jia, Ma Shaobin. Research on super resolution image reconstruction method based on improved residual subpixel convolution neural network [J]. Journal of Changchun Normal University, 2020, 39(08): 23-29.
[32] Zhao Jian-wen, Yuan Qi-ping, Qin Juan, Yang Xiao-ping, Chen Zhi-hong. Single image super-resolution reconstruction using multiple dictionaries and improved iterative back-projection [J]. Optoelectronics Letters, 2019, 15(2).
[33] Computers; Studies from Huaqiao University Provide New Data on Computers (Super-resolution reconstruction of knee magnetic resonance imaging based on deep learning) [J]. Network Weekly News, 2019.
[34] Atmosphere Research; Researchers from Chengdu University of Information Technology Provide Details of New Studies and Findings in the Area of Atmosphere Research (Generative Adversarial Networks Capabilities for Super-Resolution Reconstruction of Weather Radar Echo) [J]. Science Letter, 2019.
[35] Atmosphere Research; Researchers from Chengdu University of Information Technology Provide Details of New Studies and Findings in the Area of Atmosphere Research (Generative Adversarial Networks Capabilities for Super-Resolution Reconstruction of Weather Radar Echo) [J]. Science Letter, 2019.
[36] Sui Yao, Afacan Onur, Gholipour Ali, Warfield Simon K. Isotropic MRI Super-Resolution Reconstruction with Multi-scale Gradient Field Prior [J]. Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2019, 11766.
[37] Yunpeng Chang, Bin Luo. Bidirectional Convolutional LSTM Neural Network for Remote Sensing Image Super-Resolution [J]. Remote Sensing, 2019, 11(20).
[38] Deng Xin, Dragotti Pier Luigi. Deep Coupled ISTA Network for Multi-modal Image Super-Resolution [J]. IEEE Transactions on Image Processing, 2019.
[39] Meng Nan, So Hayden Kwok-Hay, Sun Xing, Lam Edmund. High-dimensional Dense Residual Convolutional Neural Network for Light Field Reconstruction [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
[40] Lijuan Xu, Xuemiao Xu. Regular Super-resolution Restoration Algorithm Based on Fuzzy Similarity Fusion for a Single Haze Image over the Ocean [J]. Journal of Coastal Research, 2019, 93(sp1).
[41] Yifan Yang, Ming Zhu, Yuqing Wang, Hang Yang, Yanfeng Wu, Bei Li. Super-Resolution Reconstruction of Cell Pseudo-Color Image Based on Raman Technology [J]. Sensors, 2019, 19(19).
[42] Sui Yao, Afacan Onur, Gholipour Ali, Warfield Simon K. Fast and High-Resolution Neonatal Brain MRI Through Super-Resolution Reconstruction From Acquisitions With Variable Slice Selection Direction [J]. Frontiers in Neuroscience, 2021.
[43] Tao Yu, Muller Jan-Peter. Super-Resolution Restoration of Spaceborne Ultra-High-Resolution Images Using the UCL OpTiGAN System [J]. Remote Sensing, 2021, 13(12).
[44] Tao Yu, Douté Sylvain, Muller Jan-Peter, Conway Susan J., Thomas Nicolas, Cremonese Gabriele. Ultra-High-Resolution 1 m/pixel CaSSIS DTM Using Super-Resolution Restoration and Shape-from-Shading: Demonstration over Oxia Planum on Mars [J]. Remote Sensing, 2021, 13(11).
[45] Jia Rongzhao, Wang Xiaohong. Research on Super-Resolution Reconstruction Algorithm of Image Based on Generative Adversarial Network [J]. Journal of Physics: Conference Series, 2021, 1944(1).
[46] Wan Lixuepiao, Zhang Gongping. Super-resolution reconstruction of unmanned aerial vehicle image based on deep learning [J]. Journal of Physics: Conference Series, 2021, 1948(1).