Qinghao Ye
2020 – today
- 2024
- [j9] Jiabo Ye, Junfeng Tian, Ming Yan, Haiyang Xu, Qinghao Ye, Yaya Shi, Xiaoshan Yang, Xuwu Wang, Ji Zhang, Liang He, Xin Lin: UniQRNet: Unifying Referring Expression Grounding and Segmentation with QRNet. ACM Trans. Multim. Comput. Commun. Appl. 20(8): 246:1-246:28 (2024)
- [c17] Chaoya Jiang, Wei Ye, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Shikun Zhang: TiMix: Text-Aware Image Mixing for Effective Vision-Language Pre-training. AAAI 2024: 2489-2497
- [c16] Haowei Liu, Yaya Shi, Haiyang Xu, Chunfeng Yuan, Qinghao Ye, Chenliang Li, Ming Yan, Ji Zhang, Fei Huang, Bing Li, Weiming Hu: Semantics-enhanced Cross-modal Masked Image Modeling for Vision-Language Pre-training. LREC/COLING 2024: 14664-14675
- [c15] Haowei Liu, Yaya Shi, Haiyang Xu, Chunfeng Yuan, Qinghao Ye, Chenliang Li, Ming Yan, Ji Zhang, Fei Huang, Bing Li, Weiming Hu: Unifying Latent and Lexicon Representations for Effective Video-Text Retrieval. LREC/COLING 2024: 17031-17041
- [c14] Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Anwen Hu, Haowei Liu, Qi Qian, Ji Zhang, Fei Huang: mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration. CVPR 2024: 13040-13051
- [c13] Chaoya Jiang, Haiyang Xu, Mengfan Dong, Jiaxing Chen, Wei Ye, Ming Yan, Qinghao Ye, Ji Zhang, Fei Huang, Shikun Zhang: Hallucination Augmented Contrastive Learning for Multimodal Large Language Model. CVPR 2024: 27026-27036
- [c12] Anwen Hu, Yaya Shi, Haiyang Xu, Jiabo Ye, Qinghao Ye, Ming Yan, Chenliang Li, Qi Qian, Ji Zhang, Fei Huang: mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model. ACM Multimedia 2024: 6929-6938
- [i28] Haowei Liu, Yaya Shi, Haiyang Xu, Chunfeng Yuan, Qinghao Ye, Chenliang Li, Ming Yan, Ji Zhang, Fei Huang, Bing Li, Weiming Hu: Unifying Latent and Lexicon Representations for Effective Video-Text Retrieval. CoRR abs/2402.16769 (2024)
- [i27] Haowei Liu, Yaya Shi, Haiyang Xu, Chunfeng Yuan, Qinghao Ye, Chenliang Li, Ming Yan, Ji Zhang, Fei Huang, Bing Li, Weiming Hu: Semantics-enhanced Cross-modal Masked Image Modeling for Vision-Language Pre-training. CoRR abs/2403.00249 (2024)
- [i26] Zhaorun Chen, Yichao Du, Zichen Wen, Yiyang Zhou, Chenhang Cui, Zhenzhen Weng, Haoqin Tu, Chaoqi Wang, Zhengwei Tong, Qinglan Huang, Canyu Chen, Qinghao Ye, Zhihong Zhu, Yuqing Zhang, Jiawei Zhou, Zhuokai Zhao, Rafael Rafailov, Chelsea Finn, Huaxiu Yao: MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation? CoRR abs/2407.04842 (2024)
- [i25] Tianyi Xiong, Xiyao Wang, Dong Guo, Qinghao Ye, Haoqi Fan, Quanquan Gu, Heng Huang, Chunyuan Li: LLaVA-Critic: Learning to Evaluate Multimodal Models. CoRR abs/2410.02712 (2024)
- 2023
- [j8] Xi Zhou, Qinghao Ye, Xiaolin Yang, Jiakun Chen, Haiqin Ma, Jun Xia, Javier Del Ser, Guang Yang: AI-based medical e-diagnosis for fast and automatic ventricular volume measurement in patients with normal pressure hydrocephalus. Neural Comput. Appl. 35(22): 16011-16020 (2023)
- [j7] Ping Li, Jiachen Cao, Li Yuan, Qinghao Ye, Xianghua Xu: Truncated attention-aware proposal networks with multi-scale dilation for temporal action detection. Pattern Recognit. 142: 109684 (2023)
- [c11] Xu Yang, Jiawei Peng, Zihua Wang, Haiyang Xu, Qinghao Ye, Chenliang Li, Songfang Huang, Fei Huang, Zhangzikang Li, Yu Zhang: Transforming Visual Scene Graphs to Image Captions. ACL (1) 2023: 12427-12440
- [c10] Jiabo Ye, Anwen Hu, Haiyang Xu, Qinghao Ye, Ming Yan, Guohai Xu, Chenliang Li, Junfeng Tian, Qi Qian, Ji Zhang, Qin Jin, Liang He, Xin Lin, Fei Huang: UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model. EMNLP (Findings) 2023: 2841-2858
- [c9] Xu Yang, Zhangzikang Li, Haiyang Xu, Hanwang Zhang, Qinghao Ye, Chenliang Li, Ming Yan, Yu Zhang, Fei Huang, Songfang Huang: Learning Trajectory-Word Alignments for Video-Language Tasks. ICCV 2023: 2504-2514
- [c8] Chaoya Jiang, Haiyang Xu, Wei Ye, Qinghao Ye, Chenliang Li, Ming Yan, Bin Bi, Shikun Zhang, Fei Huang, Songfang Huang: BUS: Efficient and Effective Vision-language Pre-training with Bottom-Up Patch Summarization. ICCV 2023: 2888-2898
- [c7] Qinghao Ye, Guohai Xu, Ming Yan, Haiyang Xu, Qi Qian, Ji Zhang, Fei Huang: HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training. ICCV 2023: 15359-15370
- [c6] Haiyang Xu, Qinghao Ye, Ming Yan, Yaya Shi, Jiabo Ye, Yuanhong Xu, Chenliang Li, Bin Bi, Qi Qian, Wei Wang, Guohai Xu, Ji Zhang, Songfang Huang, Fei Huang, Jingren Zhou: mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video. ICML 2023: 38728-38748
- [c5] Yaya Shi, Haowei Liu, Haiyang Xu, Zongyang Ma, Qinghao Ye, Anwen Hu, Ming Yan, Ji Zhang, Fei Huang, Chunfeng Yuan, Bing Li, Weiming Hu, Zheng-Jun Zha: Learning Semantics-Grounded Vocabulary Representation for Video-Text Retrieval. ACM Multimedia 2023: 4460-4470
- [c4] Chaoya Jiang, Haiyang Xu, Wei Ye, Qinghao Ye, Chenliang Li, Ming Yan, Bin Bi, Shikun Zhang, Fei Huang, Ji Zhang: COPA: Efficient Vision-Language Pre-training through Collaborative Object- and Patch-Text Alignment. ACM Multimedia 2023: 4480-4491
- [c3] Qinghao Ye, Haiyang Xu, Ming Yan, Chenlin Zhao, Junyang Wang, Xiaoshan Yang, Ji Zhang, Fei Huang, Jitao Sang, Changsheng Xu: mPLUG-Octopus: The Versatile Assistant Empowered by A Modularized End-to-End Multimodal LLM. ACM Multimedia 2023: 9365-9367
- [i24] Xu Yang, Zhangzikang Li, Haiyang Xu, Hanwang Zhang, Qinghao Ye, Chenliang Li, Ming Yan, Yu Zhang, Fei Huang, Songfang Huang: Learning Trajectory-Word Alignments for Video-Language Tasks. CoRR abs/2301.01953 (2023)
- [i23] Haiyang Xu, Qinghao Ye, Ming Yan, Yaya Shi, Jiabo Ye, Yuanhong Xu, Chenliang Li, Bin Bi, Qi Qian, Wei Wang, Guohai Xu, Ji Zhang, Songfang Huang, Fei Huang, Jingren Zhou: mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video. CoRR abs/2302.00402 (2023)
- [i22] Junfeng Tian, Hehong Chen, Guohai Xu, Ming Yan, Xing Gao, Jianhai Zhang, Chenliang Li, Jiayi Liu, Wenshen Xu, Haiyang Xu, Qi Qian, Wei Wang, Qinghao Ye, Jiejing Zhang, Ji Zhang, Fei Huang, Jingren Zhou: ChatPLUG: Open-Domain Generative Dialogue System with Internet-Augmented Instruction Tuning for Digital Human. CoRR abs/2304.07849 (2023)
- [i21] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qian Qi, Ji Zhang, Fei Huang: mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality. CoRR abs/2304.14178 (2023)
- [i20] Xu Yang, Jiawei Peng, Zihua Wang, Haiyang Xu, Qinghao Ye, Chenliang Li, Ming Yan, Fei Huang, Zhangzikang Li, Yu Zhang: Transforming Visual Scene Graphs to Image Captions. CoRR abs/2305.02177 (2023)
- [i19] Haiyang Xu, Qinghao Ye, Xuan Wu, Ming Yan, Yuan Miao, Jiabo Ye, Guohai Xu, Anwen Hu, Yaya Shi, Guangwei Xu, Chenliang Li, Qi Qian, Maofei Que, Ji Zhang, Xiao Zeng, Fei Huang: Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Dataset for Pre-training and Benchmarks. CoRR abs/2306.04362 (2023)
- [i18] Jiabo Ye, Anwen Hu, Haiyang Xu, Qinghao Ye, Ming Yan, Yuhao Dan, Chenlin Zhao, Guohai Xu, Chenliang Li, Junfeng Tian, Qian Qi, Ji Zhang, Fei Huang: mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding. CoRR abs/2307.02499 (2023)
- [i17] Chaoya Jiang, Haiyang Xu, Wei Ye, Qinghao Ye, Chenliang Li, Ming Yan, Bin Bi, Shikun Zhang, Fei Huang, Songfang Huang: BUS: Efficient and Effective Vision-language Pre-training with Bottom-Up Patch Summarization. CoRR abs/2307.08504 (2023)
- [i16] Chaoya Jiang, Haiyang Xu, Wei Ye, Qinghao Ye, Chenliang Li, Ming Yan, Bin Bi, Shikun Zhang, Ji Zhang, Fei Huang: COPA: Efficient Vision-Language Pre-training Through Collaborative Object- and Patch-Text Alignment. CoRR abs/2308.03475 (2023)
- [i15] Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang: Evaluation and Analysis of Hallucination in Large Vision-Language Models. CoRR abs/2308.15126 (2023)
- [i14] Jiabo Ye, Anwen Hu, Haiyang Xu, Qinghao Ye, Ming Yan, Guohai Xu, Chenliang Li, Junfeng Tian, Qi Qian, Ji Zhang, Qin Jin, Liang He, Xin Alex Lin, Fei Huang: UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model. CoRR abs/2310.05126 (2023)
- [i13] Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Anwen Hu, Haowei Liu, Qi Qian, Ji Zhang, Fei Huang, Jingren Zhou: mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration. CoRR abs/2311.04257 (2023)
- [i12] Anwen Hu, Yaya Shi, Haiyang Xu, Jiabo Ye, Qinghao Ye, Ming Yan, Chenliang Li, Qi Qian, Ji Zhang, Fei Huang: mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model. CoRR abs/2311.18248 (2023)
- [i11] Chaoya Jiang, Haiyang Xu, Mengfan Dong, Jiaxing Chen, Wei Ye, Ming Yan, Qinghao Ye, Ji Zhang, Fei Huang, Shikun Zhang: Hallucination Augmented Contrastive Learning for Multimodal Large Language Model. CoRR abs/2312.06968 (2023)
- [i10] Chaoya Jiang, Wei Ye, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Shikun Zhang: TiMix: Text-aware Image Mixing for Effective Vision-Language Pre-training. CoRR abs/2312.08846 (2023)
- 2022
- [j6] Yingchao Pan, Ouhan Huang, Qinghao Ye, Zhongjin Li, Wenjiang Wang, Guodun Li, Yuxing Chen: Exploring Global Diversity and Local Context for Video Summarization. IEEE Access 10: 43611-43622 (2022)
- [j5] Qinghao Ye, Yuan Gao, Weiping Ding, Zhangming Niu, Chengjia Wang, Yinghui Jiang, Minhao Wang, Evandro Fei Fang, Wade Menpes-Smith, Jun Xia, Guang Yang: Robust weakly supervised learning for COVID-19 recognition using multi-center CT images. Appl. Soft Comput. 116: 108291 (2022)
- [j4] Guang Yang, Qinghao Ye, Jun Xia: Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf. Fusion 77: 29-52 (2022)
- [j3] Qi Bi, Beichen Zhou, Kun Qin, Qinghao Ye, Gui-Song Xia: All Grains, One Scheme (AGOS): Learning Multigrain Instance Representation for Aerial Scene Classification. IEEE Trans. Geosci. Remote. Sens. 60: 1-17 (2022)
- [i9] Yingchao Pan, Ouhan Huang, Qinghao Ye, Zhongjin Li, Wenjiang Wang, Guodun Li, Yuxing Chen: Exploring Global Diversity and Local Context for Video Summarization. CoRR abs/2201.11345 (2022)
- [i8] Xi Zhou, Qinghao Ye, Xiaolin Yang, Jiakuan Chen, Haiqin Ma, Jun Xia, Javier Del Ser, Guang Yang: AI-based Medical e-Diagnosis for Fast and Automatic Ventricular Volume Measurement in the Patients with Normal Pressure Hydrocephalus. CoRR abs/2202.00650 (2022)
- [i7] Qi Bi, Beichen Zhou, Kun Qin, Qinghao Ye, Gui-Song Xia: All Grains, One Scheme (AGOS): Learning Multi-grain Instance Representation for Aerial Scene Classification. CoRR abs/2205.03371 (2022)
- [i6] Qinghao Ye, Guohai Xu, Ming Yan, Haiyang Xu, Qi Qian, Ji Zhang, Fei Huang: HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training. CoRR abs/2212.14546 (2022)
- 2021
- [j2] Ping Li, Qinghao Ye, Luming Zhang, Li Yuan, Xianghua Xu, Ling Shao: Exploring global diverse attention via pairwise temporal relation for video summarization. Pattern Recognit. 111: 107677 (2021)
- [c2] Qinghao Ye, Jun Xia, Guang Yang: Explainable AI for COVID-19 CT Classifiers: An Initial Comparison Study. CBMS 2021: 521-526
- [c1] Qinghao Ye, Xiyue Shen, Yuan Gao, Zirui Wang, Qi Bi, Ping Li, Guang Yang: Temporal Cue Guided Video Highlight Detection with Low-Rank Audio-Visual Fusion. ICCV 2021: 7930-7939
- [i5] Guang Yang, Qinghao Ye, Jun Xia: Unbox the Black-box for the Medical Explainable AI via Multi-modal and Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond. CoRR abs/2102.01998 (2021)
- [i4] Qinghao Ye, Jun Xia, Guang Yang: Explainable AI For COVID-19 CT Classifiers: An Initial Comparison Study. CoRR abs/2104.14506 (2021)
- [i3] Qinghao Ye, Yuan Gao, Weiping Ding, Zhangming Niu, Chengjia Wang, Yinghui Jiang, Minhao Wang, Evandro Fei Fang, Wade Menpes-Smith, Jun Xia, Guang Yang: Robust Weakly Supervised Learning for COVID-19 Recognition Using Multi-Center CT Images. CoRR abs/2112.04984 (2021)
- 2020
- [i2] Ping Li, Qinghao Ye, Luming Zhang, Li Yuan, Xianghua Xu, Ling Shao: Exploring global diverse attention via pairwise temporal relation for video summarization. CoRR abs/2009.10942 (2020)
2010 – 2019
- 2019
- [j1] Qinghao Ye, Daijian Tu, Feiwei Qin, Zizhao Wu, Yong Peng, Shuying Shen: Dual attention based fine-grained leukocyte recognition for imbalanced microscopic images. J. Intell. Fuzzy Syst. 37(5): 6971-6982 (2019)
- [i1] Qinghao Ye, Kaiyuan Hu, Yizhe Wang: Application of Time Series Analysis to Traffic Accidents in Los Angeles. CoRR abs/1911.12813 (2019)