Deeper networks yield only limited gains for image super-resolution (SR) and are much harder to train. The main reason is that such networks stack many building blocks, which produce many redundant features. Moreover, most SR methods neglect the fact that different features carry different types of information and contribute to image reconstruction in varying degrees, and thus lack sufficient representational capability. To address these issues, we propose a mid-weight bypass connection attention network (BCAN) with stronger representational capability but fewer parameters. Specifically, we design a novel bypass connection attention module (BCAM), consisting of several bypass connection attention blocks (BCABs), which enhances high-contribution information and suppresses redundant information. Further, we embed a mixed residual attention unit (MRAU), composed of a channel attention unit and a spatial attention unit, in each BCAB. After all hierarchical features are obtained, we propose an adaptive feature fusion module (AFFM), which effectively combines the hierarchical features according to the contribution of each BCAM. Experiments on benchmark datasets with various degradation models show that our BCAN achieves better performance than existing state-of-the-art methods.
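The abstract names three ingredients: a channel attention unit, a spatial attention unit (together forming the MRAU), and an adaptive weighted fusion of hierarchical features (AFFM). The sketch below illustrates these mechanisms in plain NumPy under stated assumptions: the attention weights here are random/fixed stand-ins for learned parameters, and the exact layer layout of the paper's MRAU and AFFM is not specified in the abstract, so this is a generic squeeze-and-excitation-style channel gate, an avg/max-pooled spatial mask, and a softmax-weighted feature sum, not the authors' implementation.

```python
import numpy as np

def channel_attention(x, reduction=4):
    # x: feature map of shape (C, H, W).
    # Squeeze: global average pooling gives one descriptor per channel.
    C, H, W = x.shape
    z = x.mean(axis=(1, 2))                       # (C,)
    # Excitation: two tiny fully connected layers; random weights here
    # stand in for learned parameters (illustrative assumption).
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((C // reduction, C)) * 0.1
    W2 = rng.standard_normal((C, C // reduction)) * 0.1
    s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))  # sigmoid gate
    # Rescale each channel by its attention weight.
    return x * s[:, None, None]

def spatial_attention(x):
    # x: (C, H, W); pool across channels to get two (H, W) descriptors.
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    # A learned convolution would normally mix these; we simply average
    # them and squash to (0, 1) as a spatial mask (illustrative assumption).
    m = 1.0 / (1.0 + np.exp(-(avg + mx) / 2.0))
    return x * m[None, :, :]

def adaptive_fusion(features, logits):
    # Weighted sum of hierarchical feature maps: a softmax turns per-module
    # scores into fusion weights that sum to 1, so modules with higher
    # contribution dominate the fused output.
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, features))
```

Both attention units preserve the feature-map shape, so they can be chained inside a residual block; with equal logits, `adaptive_fusion` reduces to a plain average of the hierarchical features.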