Selective Focus: Investigating Semantics Sensitivity in Post-training Quantization for Lane Detection

Authors

  • Yunqian Fan — School of Information Science and Technology, ShanghaiTech University; SenseTime Research
  • Xiuying Wei — SenseTime Research
  • Ruihao Gong — State Key Laboratory of Complex & Critical Software Environment, Beihang University, Beijing, China; SenseTime Research
  • Yuqing Ma — Institute of Artificial Intelligence, Beihang University, Beijing, China; State Key Laboratory of Complex & Critical Software Environment, Beihang University, Beijing, China
  • Xiangguo Zhang — SenseTime Research
  • Qi Zhang — SenseTime Research
  • Xianglong Liu — State Key Laboratory of Complex & Critical Software Environment, Beihang University, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v38i11.29080

Keywords:

ML: Learning on the Edge & Model Compression, CV: Applications

Abstract

Lane detection (LD) plays a crucial role in enhancing the L2+ capabilities of autonomous driving and has captured widespread attention. Post-Training Quantization (PTQ) can facilitate the practical deployment of LD models, enabling fast inference with limited memory and without labeled data. However, prior PTQ methods do not consider the complex LD outputs that carry physical semantics, such as offsets, locations, etc., and thus cannot be directly applied to LD models. In this paper, we are the first to investigate semantic sensitivity to post-processing for lane detection, via a novel Lane Distortion Score. Moreover, we identify two main factors impacting LD performance after quantization, namely intra-head sensitivity and inter-head sensitivity, where a small quantization error in specific semantics can cause significant lane distortion. We therefore propose a Selective Focus framework, comprising Semantic Guided Focus and Sensitivity Aware Selection modules, to incorporate post-processing information into PTQ reconstruction. Based on the observed intra-head sensitivity, Semantic Guided Focus prioritizes foreground-related semantics using a practical proxy. For inter-head sensitivity, Sensitivity Aware Selection efficiently recognizes influential prediction heads and refines the optimization objectives at runtime. Extensive experiments have been conducted on a wide variety of models, including keypoint-, anchor-, curve-, and segmentation-based ones. Our method produces quantized models in minutes on a single GPU and achieves up to a 6.4% F1 score improvement on the CULane dataset. Code and a supplementary statement can be found at https://github.com/PannenetsF/SelectiveFocus.
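The abstract describes weighting the PTQ reconstruction objective by semantics: foreground-related positions are emphasized within each head (Semantic Guided Focus), and influential heads are re-weighted across heads (Sensitivity Aware Selection). A minimal sketch of such a weighted reconstruction loss is shown below; the function name, the dictionary-of-heads layout, and the specific weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def semantic_weighted_recon_loss(fp_out, q_out, fg_mask, head_weights):
    """Toy semantics-weighted PTQ reconstruction loss (illustrative only).

    fp_out / q_out: dicts mapping head name -> (H, W) float arrays, the
        full-precision and quantized model outputs for each prediction head.
    fg_mask: (H, W) array of 0/1 values marking foreground (lane-related)
        positions, acting as a proxy for intra-head sensitivity.
    head_weights: dict mapping head name -> scalar importance, acting as a
        proxy for inter-head sensitivity.
    """
    loss = 0.0
    for name, fp in fp_out.items():
        err = (fp - q_out[name]) ** 2
        # Intra-head: up-weight reconstruction error at foreground positions.
        focused = err * (1.0 + fg_mask)
        # Inter-head: scale each head's contribution by its sensitivity.
        loss += head_weights[name] * focused.mean()
    return loss
```

With this weighting, a given quantization error costs more when it falls on foreground pixels or on a head deemed influential, steering the PTQ optimizer toward the semantics that most distort the final lanes after post-processing.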

Published

2024-03-24

How to Cite

Fan, Y., Wei, X., Gong, R., Ma, Y., Zhang, X., Zhang, Q., & Liu, X. (2024). Selective Focus: Investigating Semantics Sensitivity in Post-training Quantization for Lane Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 11936-11943. https://doi.org/10.1609/aaai.v38i11.29080

Section

AAAI Technical Track on Machine Learning II