Multiple Tasks-Based Multi-Source Domain Adaptation Using Divide-and-Conquer Strategy

BH Ngo, YJ Chae, SJ Park, JH Kim, SI Cho - IEEE Access, 2023 - ieeexplore.ieee.org
In single-source unsupervised domain adaptation (SUDA), it is often assumed that a single source domain can cover all target domain features. However, the limited number of labeled samples means that a model trained on a labeled source domain cannot always cover all target representations in practice. Therefore, multi-source unsupervised domain adaptation (MSUDA) has recently become an attractive topic because it can provide richer information than SUDA. In the MSUDA setting, multiple labeled source datasets and an unlabeled target dataset are available. The labeled source domains follow distinct distributions and therefore contribute differently to the target domain. As a result, when multiple source domains are merged into a single source domain, the model tends to focus on whichever source domain makes the dominant contribution to the target domain, which induces learning bias in the MSUDA setting. To solve this problem, this paper proposes a divide-and-conquer-based MSUDA framework that divides the MSUDA problem into multiple SUDA tasks, which it then conquers using multiple task-specific models. Each task is a pair consisting of a single source domain and the target domain, and the tasks provide different views of the target domain because each task has a different source domain. The tasks then cooperate to supplement each other's knowledge via collaborative learning. This cooperation between multiple views can suppress noisy information and preserve critical information, thus mitigating the negative transfer problem during domain adaptation and significantly boosting classification accuracy on the target domain. The proposed method achieved state-of-the-art performance on several real-world visual domain adaptation datasets.
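To make the divide-and-conquer structure concrete, below is a minimal PyTorch-style sketch of the general idea: one task-specific model per (source, target) pair, each trained on its own labeled source, with a simple consistency term on the shared unlabeled target standing in for the collaborative learning step. This is an illustrative sketch under assumed choices, not the paper's implementation; the backbone, the KL-to-ensemble loss, and the names `build_task_model`, `collaborative_step`, and `lam` are all hypothetical.

```python
# Hypothetical sketch: divide MSUDA into per-source SUDA tasks, then let the
# task-specific models cooperate on the shared unlabeled target. Loss choices
# (cross-entropy + KL to the detached ensemble mean) are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_task_model(num_classes: int) -> nn.Module:
    # Placeholder backbone; the paper's actual architecture is not specified here.
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                         nn.Linear(256, num_classes))

def collaborative_step(models, optimizers, source_batches, target_batch, lam=0.1):
    """One update over all SUDA tasks.

    models[i] handles task i = (source_i, target); source_batches[i] is
    (x_src, y_src) for that source, target_batch holds unlabeled target images.
    """
    # Each model predicts on the shared target batch; the detached mean acts
    # as a cooperative "ensemble view" that individual tasks are pulled toward.
    with torch.no_grad():
        target_probs = [F.softmax(m(target_batch), dim=1) for m in models]
        ensemble = torch.stack(target_probs).mean(dim=0)

    for model, opt, (x_src, y_src) in zip(models, optimizers, source_batches):
        opt.zero_grad()
        sup_loss = F.cross_entropy(model(x_src), y_src)  # conquer: per-task SUDA
        log_p_tgt = F.log_softmax(model(target_batch), dim=1)
        collab_loss = F.kl_div(log_p_tgt, ensemble, reduction="batchmean")  # cooperate
        (sup_loss + lam * collab_loss).backward()
        opt.step()

# Toy usage with two source domains (shapes are illustrative):
models = [build_task_model(10) for _ in range(2)]
optimizers = [torch.optim.SGD(m.parameters(), lr=0.01) for m in models]
source_batches = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
                  for _ in range(2)]
target_batch = torch.randn(8, 3, 32, 32)
collaborative_step(models, optimizers, source_batches, target_batch)
```

The key design point the sketch captures is that no single model ever sees a merged source domain, so no dominant source can bias the shared representation; agreement on the target is instead encouraged only through the collaborative term.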