Efficient semantic segmentation is essential for a wide array of computer vision applications, and knowledge distillation has emerged as a promising methodology for model compression and efficiency. However, we observed that an excess of positive pixels can dilute attention weights, hindering the student model's learning process. To tackle this challenge, we introduce the Local and Global Attention Distillation (LGAD) framework, a block-based technique that distills both local and global attention. The LGAD framework partitions feature maps and output probabilities into well-defined local and global blocks, effectively mitigating the dilution of attention weights. By doing so, it sharpens the distinction between positive and negative pixels and amplifies the focus on salient regions within each local and global block. We conducted comprehensive experiments on three benchmark datasets: Cityscapes, CamVid, and Pascal VOC 2012. The experimental results demonstrate the effectiveness of our proposed LGAD and confirm its superiority over several state-of-the-art distillation methods for semantic segmentation.
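The abstract does not give the exact loss formulation, but the core idea (normalizing attention within non-overlapping blocks so that salient pixels in one region cannot dilute the weights of another) can be sketched as follows. This is a minimal illustration under stated assumptions: the squared-activation channel pooling, the MSE matching term, and all function names (`block_attention`, `global_attention`, `lgad_loss`) and the block size are hypothetical choices, not the paper's confirmed implementation.

```python
import torch
import torch.nn.functional as F

def block_attention(feat, block_size):
    """Per-block spatial attention (a sketch, not the paper's exact pooling).

    feat: (B, C, H, W); H and W are assumed divisible by block_size.
    Channel-pooled saliency is softmax-normalized *within* each non-overlapping
    block_size x block_size block, so an excess of positive pixels in one
    region cannot dilute the attention weights assigned in another.
    """
    b, _, h, w = feat.shape
    att = feat.pow(2).mean(dim=1)                    # (B, H, W) saliency map
    att = att.unfold(1, block_size, block_size)      # (B, H/bs, W, bs)
    att = att.unfold(2, block_size, block_size)      # (B, H/bs, W/bs, bs, bs)
    att = att.contiguous().view(b, -1, block_size * block_size)
    return F.softmax(att, dim=-1)                    # normalize per block

def global_attention(feat):
    """Attention softmax-normalized over the entire spatial map (global block)."""
    b = feat.shape[0]
    att = feat.pow(2).mean(dim=1).view(b, -1)
    return F.softmax(att, dim=-1)

def lgad_loss(student_feat, teacher_feat, block_size=8):
    """Sum of local (block-wise) and global attention-matching terms."""
    local = F.mse_loss(block_attention(student_feat, block_size),
                       block_attention(teacher_feat, block_size))
    glob = F.mse_loss(global_attention(student_feat),
                      global_attention(teacher_feat))
    return local + glob
```

The key design point this sketch captures is the normalization scope: a single softmax over the whole map lets many positive pixels flatten the distribution, whereas the per-block softmax keeps the contrast between positive and negative pixels within each local region.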