*Result*: FAST: Foreground-aware active self-training for domain adaptive object detection.
*Further Information*
*Domain adaptive object detection (DAOD) aims to enable object detectors to perform well on an unlabeled target domain that differs from the source domain used for training. Among various approaches, mean-teacher self-training has emerged as a promising framework in DAOD. However, the noisy pseudo-labels generated by the teacher model constrain its potential for further performance improvements, making it challenging to reach fully supervised performance. While annotating all target samples is prohibitively expensive, labeling a small subset is often acceptable. Active domain adaptation (ADA) therefore offers a natural remedy: it selectively annotates the most informative target samples to maximize performance gains at minimal annotation cost. However, its application to DAOD remains underexplored. This paper proposes Foreground-aware Active Self-Training (FAST), establishing an effective framework for active DAOD. Specifically, FAST introduces two novel sampling strategies: foreground diversity clustering sampling (FDCS), which maximizes the diversity of selected foreground objects, and teacher-student discrepancy uncertainty sampling (TDUN), which identifies samples with high prediction uncertainty. These strategies are implemented within a decoupled active learning paradigm that employs a dedicated sampling model to identify the most informative target samples. By incorporating the selected samples into the mean-teacher framework, FAST significantly improves detection performance on the target domain. Extensive experiments demonstrate that our method achieves superior performance across multiple DAOD datasets, showcasing its effectiveness in bridging the domain gap in challenging scenarios.
(Copyright © 2025 Elsevier Ltd. All rights reserved.)*
*Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. All authors have contributed substantially to this work and have approved the final manuscript.*
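The abstract's two sampling ideas can be illustrated with a minimal sketch. This is not the paper's implementation: farthest-point sampling stands in for the FDCS clustering step, a teacher-student confidence gap stands in for the TDUN uncertainty score, and all function names, parameters, and toy data below are hypothetical.

```python
# Hypothetical sketch of the two sampling ideas described in the abstract.
# (1) Diversity: greedily pick mutually distant foreground features
#     (farthest-point sampling, a simple proxy for diversity clustering).
# (2) Uncertainty: rank samples by the teacher-student confidence gap.
import numpy as np

def diversity_select(features, k):
    """Greedy farthest-point sampling: pick k mutually distant feature vectors."""
    chosen = [0]  # arbitrary starting sample
    dists = np.linalg.norm(features - features[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))          # sample farthest from all chosen ones
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

def discrepancy_select(teacher_scores, student_scores, k):
    """Rank samples by teacher-student confidence gap (larger gap = more uncertain)."""
    gap = np.abs(teacher_scores - student_scores)
    return list(np.argsort(-gap)[:k])

# Toy data: 100 images, each with a pooled foreground feature and two confidences.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))   # per-image pooled foreground features
t = rng.uniform(size=100)            # teacher confidence per image
s = rng.uniform(size=100)            # student confidence per image

budget = 10  # annotation budget split between the two criteria
picked = set(diversity_select(feats, budget // 2)) | set(discrepancy_select(t, s, budget // 2))
print(sorted(picked))
```

In the actual method, such selected indices would be sent for annotation and the labeled samples fed back into the mean-teacher training loop; the real FDCS/TDUN criteria operate on detector features and predictions rather than this toy data.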