Zhiquan Lai
2020 – today
2024
- [j11] Dongsheng Li, Shengwei Li, Zhiquan Lai, Yongquan Fu, Xiangyu Ye, Lei Cai, Linbo Qiao: A Memory-Efficient Hybrid Parallel Framework for Deep Neural Network Training. IEEE Trans. Parallel Distributed Syst. 35(4): 577-591 (2024)
- [j10] Shengwei Li, Kai Lu, Zhiquan Lai, Weijie Liu, Keshi Ge, Dongsheng Li: A Multidimensional Communication Scheduling Method for Hybrid Parallel DNN Training. IEEE Trans. Parallel Distributed Syst. 35(8): 1415-1428 (2024)
- [c29] Ning Liu, Songlei Jian, Dongsheng Li, Yiming Zhang, Zhiquan Lai, Hongzuo Xu: Hierarchical Adaptive Pooling by Capturing High-order Dependency for Graph Representation Learning (Extended Abstract). ICDE 2024: 5683-5684
- [c28] Chongshan Liang, Yi Dai, Jun Xia, Jinbo Xu, Jintao Peng, Weixia Xu, Ming Xie, Jie Liu, Zhiquan Lai, Sheng Ma, Qi Zhu: The Self-adaptive and Topology-aware MPI_Bcast leveraging Collective offload on Tianhe Express Interconnect. IPDPS 2024: 791-801
- [i8] Ao Shen, Qiang Wang, Zhiquan Lai, Xionglve Li, Dongsheng Li: Accurate and Efficient Fine-Tuning of Quantized Large Language Models Through Optimal Balance. CoRR abs/2407.17029 (2024)

2023
- [j9] Keshi Ge, Kai Lu, Yongquan Fu, Xiaoge Deng, Zhiquan Lai, Dongsheng Li: Compressed Collective Sparse-Sketch for Distributed Data-Parallel Training of Deep Learning Models. IEEE J. Sel. Areas Commun. 41(4): 941-963 (2023)
- [j8] Lizhi Zhang, Kai Lu, Zhiquan Lai, Yongquan Fu, Yu Tang, Dongsheng Li: Accelerating GNN Training by Adapting Large Graphs to Distributed Heterogeneous Architectures. IEEE Trans. Computers 72(12): 3473-3488 (2023)
- [j7] Ning Liu, Songlei Jian, Dongsheng Li, Yiming Zhang, Zhiquan Lai, Hongzuo Xu: Hierarchical Adaptive Pooling by Capturing High-Order Dependency for Graph Representation Learning. IEEE Trans. Knowl. Data Eng. 35(4): 3952-3965 (2023)
- [j6] Zhiquan Lai, Shengwei Li, Xudong Tang, Keshi Ge, Weijie Liu, Yabo Duan, Linbo Qiao, Dongsheng Li: Merak: An Efficient Distributed DNN Training Framework With Automated 3D Parallelism for Giant Foundation Models. IEEE Trans. Parallel Distributed Syst. 34(5): 1466-1478 (2023)
- [j5] Peng Liang, Yu Tang, Xiaoda Zhang, Youhui Bai, Teng Su, Zhiquan Lai, Linbo Qiao, Dongsheng Li: A Survey on Auto-Parallelism of Large-Scale Deep Learning Training. IEEE Trans. Parallel Distributed Syst. 34(8): 2377-2390 (2023)
- [c27] Wei Wang, Zhiquan Lai, Shengwei Li, Weijie Liu, Keshi Ge, Yujie Liu, Ao Shen, Dongsheng Li: Prophet: Fine-grained Load Balancing for Parallel Training of Large-scale MoE Models. CLUSTER 2023: 82-94
- [c26] Hongyu Chen, Zhejiang Ran, Keshi Ge, Zhiquan Lai, Jingfei Jiang, Dongsheng Li: Auto-Divide GNN: Accelerating GNN Training with Subgraph Division. Euro-Par 2023: 367-382
- [c25] Yujie Liu, Zhiquan Lai, Weijie Liu, Wei Wang, Dongsheng Li: Efficient Large Models Fine-tuning on Commodity Servers via Memory-balanced Pipeline Parallelism. HPCC/DSS/SmartCity/DependSys 2023: 726-727
- [c24] Zhiquan Lai, Yanqi Hao, Shengwei Li, Dongsheng Li: Communication Analysis for Multidimensional Parallel Training of Large-scale DNN Models. HPCC/DSS/SmartCity/DependSys 2023: 728-729
- [c23] Zhiquan Lai, Yujie Liu, Wei Wang, Yanqi Hao, Dongsheng Li: Rethinking the Distributed DNN Training Cluster Design from the Cost-effectiveness View. HPCC/DSS/SmartCity/DependSys 2023: 730-731
- [c22] Yuanyuan Xiao, Zhiquan Lai, Dongsheng Li: CD-Sched: An Automated Scheduling Framework for Accelerating Neural Network Training on Shared Memory CPU-DSP Platforms. PCCNT 2023: 41:1-41:6
- [i7] Shengwei Li, Zhiquan Lai, Yanqi Hao, Weijie Liu, Keshi Ge, Xiaoge Deng, Dongsheng Li, Kai Lu: Automated Tensor Model Parallelism with Overlapped Communication for Efficient Foundation Model Training. CoRR abs/2305.16121 (2023)

2022
- [j4] Keshi Ge, Zhejiang Ran, Zhiquan Lai, Lizhi Zhang, Dongsheng Li: BRGraph: An efficient graph neural network training system by reusing batch data on GPU. Concurr. Comput. Pract. Exp. 34(15) (2022)
- [c21] Weijie Liu, Zhiquan Lai, Shengwei Li, Yabo Duan, Keshi Ge, Dongsheng Li: AutoPipe: A Fast Pipeline Parallelism Approach with Balanced Partitioning and Micro-batch Slicing. CLUSTER 2022: 301-312
- [c20] Yabo Duan, Zhiquan Lai, Shengwei Li, Weijie Liu, Keshi Ge, Peng Liang, Dongsheng Li: HPH: Hybrid Parallelism on Heterogeneous Clusters for Accelerating Large-scale DNNs Training. CLUSTER 2022: 313-323
- [c19] Keshi Ge, Yongquan Fu, Yiming Zhang, Zhiquan Lai, Xiaoge Deng, Dongsheng Li: S2 Reducer: High-Performance Sparse Communication to Accelerate Distributed Deep Learning. ICASSP 2022: 5233-5237
- [c18] Shengwei Li, Zhiquan Lai, Dongsheng Li, Yiming Zhang, Xiangyu Ye, Yabo Duan: EmbRace: Accelerating Sparse Communication for Distributed Training of Deep Neural Networks. ICPP 2022: 7:1-7:11
- [c17] Yuqi He, Zhiquan Lai, Zhejiang Ran, Lizhi Zhang, Dongsheng Li: SCGraph: Accelerating Sample-based GNN Training by Staged Caching of Features on GPUs. ISPA/BDCloud/SocialCom/SustainCom 2022: 106-113
- [c16] Yuqi He, Zhiquan Lai, Zhejiang Ran, Lizhi Zhang, Dongsheng Li: Accelerating Sample-based GNN Training by Feature Caching on GPUs. SmartCloud 2022: 163-164
- [i6] Yu Tang, Chenyu Wang, Yufan Zhang, Yuliang Liu, Xingcheng Zhang, Linbo Qiao, Zhiquan Lai, Dongsheng Li: DELTA: Dynamically Optimizing GPU Memory beyond Tensor Recomputation. CoRR abs/2203.15980 (2022)
- [i5] Zhiquan Lai, Shengwei Li, Xudong Tang, Keshi Ge, Weijie Liu, Yabo Duan, Linbo Qiao, Dongsheng Li: Merak: An Efficient Distributed DNN Training Framework with Automated 3D Parallelism for Giant Foundation Models. CoRR abs/2206.04959 (2022)

2021
- [j3] Dongsheng Li, Zhiyao Hu, Zhiquan Lai, Yiming Zhang, Kai Lu: Coordinative Scheduling of Computation and Communication in Data-Parallel Systems. IEEE Trans. Computers 70(12): 2182-2197 (2021)
- [c15] Lizhi Zhang, Zhiquan Lai, Shengwei Li, Yu Tang, Feng Liu, Dongsheng Li: 2PGraph: Accelerating GNN Training over Large Graphs on GPU Clusters. CLUSTER 2021: 103-113
- [c14] Keshi Ge, Yiming Zhang, Yongquan Fu, Zhiquan Lai, Xiaoge Deng, Dongsheng Li: CASQ: Accelerate Distributed Deep Learning with Sketch-Based Gradient Quantization. CLUSTER 2021: 825-826
- [c13] Xiangyu Ye, Zhiquan Lai, Dongsheng Li: Prediction of the Cyanobacteria Coverage in Time-series Images based on Convolutional Neural Network. ICCCV 2021: 153-158
- [c12] Yuetong Yang, Zhiquan Lai, Lei Cai, Dongsheng Li: HMA: An Efficient Training Method for NLP Models. ICIAI 2021: 20-25
- [c11] Xiangyu Ye, Zhiquan Lai, Shengwei Li, Lei Cai, Ding Sun, Linbo Qiao, Dongsheng Li: Hippie: A Data-Paralleled Pipeline Approach to Improve Memory-Efficiency and Scalability for Large DNN Training. ICPP 2021: 71:1-71:10
- [c10] Zhejiang Ran, Zhiquan Lai, Lizhi Zhang, Dongsheng Li: Accelerate Graph Neural Network Training by Reusing Batch Data on GPUs. IPCCC 2021: 1-8
- [c9] Lizhi Zhang, Zhiquan Lai, Yu Tang, Dongsheng Li, Feng Liu, Xiaochun Luo: PCGraph: Accelerating GNN Inference on Large Graphs via Partition Caching. ISPA/BDCloud/SocialCom/SustainCom 2021: 279-287
- [i4] Ning Liu, Songlei Jian, Dongsheng Li, Yiming Zhang, Zhiquan Lai, Hongzuo Xu: Hierarchical Adaptive Pooling by Capturing High-order Dependency for Graph Representation Learning. CoRR abs/2104.05960 (2021)
- [i3] Keshi Ge, Yongquan Fu, Zhiquan Lai, Xiaoge Deng, Dongsheng Li: S2 Reducer: High-Performance Sparse Communication to Accelerate Distributed Deep Learning. CoRR abs/2110.02140 (2021)
- [i2] Shengwei Li, Zhiquan Lai, Dongsheng Li, Xiangyu Ye, Yabo Duan: EmbRace: Accelerating Sparse Communication for Distributed Training of NLP Neural Networks. CoRR abs/2110.09132 (2021)

2020
- [c8] Yuetong Yang, Zhiquan Lai, Lei Cai, Dongsheng Li: Poster Abstract: Model Average-based Distributed Training for Sparse Deep Neural Networks. INFOCOM Workshops 2020: 1346-1347
- [c7] Yu Tang, Zhigang Kan, Dequan Sun, Linbo Qiao, Jingjing Xiao, Zhiquan Lai, Dongsheng Li: ADMMiRNN: Training RNN with Stable Convergence via an Efficient ADMM Approach. ECML/PKDD (2) 2020: 3-18
- [i1] Yu Tang, Zhigang Kan, Dequan Sun, Linbo Qiao, Jingjing Xiao, Zhiquan Lai, Dongsheng Li: ADMMiRNN: Training RNN with Stable Convergence via An Efficient ADMM Approach. CoRR abs/2006.05622 (2020)
2010 – 2019
2019
- [c6] Dongsheng Li, Zhiquan Lai, Keshi Ge, Yiming Zhang, Zhaoning Zhang, Qinglin Wang, Huaimin Wang: HPDL: Towards a General Framework for High-performance Distributed Deep Learning. ICDCS 2019: 1742-1753

2017
- [j2] Zhiquan Lai, King Tin Lam, Cho-Li Wang, Jinshu Su: PoweRock: Power Modeling and Flexible Dynamic Power Management for Many-Core Architectures. IEEE Syst. J. 11(2): 600-612 (2017)
- [c5] Yan Zhu, Guidong Zhang, Zhiquan Lai, Boya Niu, Yongjun Shen: A Two-Tiered Defence of Techniques to Prevent SQL Injection Attacks. IMIS 2017: 286-295

2015
- [j1] Zhiquan Lai, King Tin Lam, Cho-Li Wang, Jinshu Su: Latency-aware DVFS for efficient power state transitions on many-core architectures. J. Supercomput. 71(7): 2720-2747 (2015)

2014
- [c4] King Tin Lam, Jinghao Shi, Dominic Hung, Cho-Li Wang, Zhiquan Lai, Wangbin Zhu, Youliang Yan: Rhymes: A shared virtual memory system for non-coherent tiled many-core architectures. ICPADS 2014: 183-190
- [c3] Zhiquan Lai, Baokang Zhao, Jinshu Su: Efficient DVFS to Prevent Hard Faults for Many-Core Architectures. ICT-EurAsia 2014: 674-679
- [c2] Zhiquan Lai, King Tin Lam, Cho-Li Wang, Jinshu Su: A Power Modelling Approach for Many-Core Architectures. SKG 2014: 128-132

2012
- [c1] Lin-Bo Qiao, Bo-Feng Zhang, Zhiquan Lai, Jinshu Su: Mining of Attack Models in IDS Alerts from Network Backbone by a Two-stage Clustering Method. IPDPS Workshops 2012: 1263-1269