Title:
Prompt-guided consistency learning for multi-label classification with incomplete labels.
Authors:
Wang S: School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China; Key Lab of Image Processing and Intelligent Control, Ministry of Education, Wuhan, 430074, China.
Wan Q: Independent Researcher, Wuhan, 430074, China.
Zhang Z: School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China; Key Lab of Image Processing and Intelligent Control, Ministry of Education, Wuhan, 430074, China.
Zeng Z: School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China; Key Lab of Image Processing and Intelligent Control, Ministry of Education, Wuhan, 430074, China. Electronic address: zgzeng@hust.edu.cn.
Source:
Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2025 Oct; Vol. 190, pp. 107604. Date of Electronic Publication: 2025 May 26.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Pergamon Press. Country of Publication: United States. NLM ID: 8805018. Publication Model: Print-Electronic. Cited Medium: Internet. ISSN: 1879-2782 (Electronic). Linking ISSN: 0893-6080. NLM ISO Abbreviation: Neural Netw. Subsets: MEDLINE.
Imprint Name(s):
Original Publication: New York : Pergamon Press, [c1988-
Contributed Indexing:
Keywords: Confirmation bias; Contrastive learning; Incomplete labels; Multi-label classification; Pseudo-labeling; Semantic decoupling
Entry Date(s):
Date Created: 20250605 Date Completed: 20250814 Latest Revision: 20250814
Update Code:
20260130
DOI:
10.1016/j.neunet.2025.107604
PMID:
40472581
Database:
MEDLINE

*Further Information*

*Addressing insufficient supervision and improving model generalization are essential for multi-label classification with incomplete annotations, i.e., partial labels and single positive labels. Recent studies incorporate pseudo-labels to provide additional supervision and enhance generalization. However, noise in model-generated pseudo-labels tends to accumulate, causing confirmation bias during training. Self-correction methods, the approaches commonly used to mitigate confirmation bias, rely on model predictions and thus remain susceptible to bias caused by visual confusion, i.e., visual ambiguity and visual similarity. To reduce visual confusion, we propose a prompt-guided consistency learning (PGCL) framework designed for both incomplete-labeling settings. Specifically, we introduce an intra-category supervised contrastive loss, which imposes consistency constraints on reliable positive samples within the feature space of each category, rather than across the feature space of all categories as in the traditional inter-category supervised contrastive loss. Building on this, label-level contrasting within each category sharpens the distinction between true-positive samples and visually confused samples. Because the proposed contrastive loss requires high-quality label-level representations as contrastive samples, we further develop a class-specific semantic decoupling module that leverages CLIP's strong vision-language alignment. Extensive experiments on multiple datasets demonstrate that our method effectively handles both incomplete-labeling settings and achieves state-of-the-art performance.
(Copyright © 2025 Elsevier Ltd. All rights reserved.)*
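The abstract's central idea, contrasting reliable positives within each category's own feature space rather than across all categories, can be sketched as follows. This is an illustrative NumPy reconstruction under stated assumptions, not the authors' implementation: it assumes one label-level embedding per category per image (shape `B × C × D`, as a semantic-decoupling module might produce) and a boolean mask of reliable positives; the function name, temperature, and masking scheme are placeholders.

```python
import numpy as np

def intra_category_supcon(feats, pos_mask, temperature=0.1):
    """Illustrative intra-category supervised contrastive loss (sketch).

    feats:    (B, C, D) label-level embeddings, one per category per image
    pos_mask: (B, C) bool, True where the label is a reliable positive
    Contrasting is done separately inside each category's feature space:
    reliable positives of a category attract each other, all other samples
    of that same category act as negatives.
    """
    B, C, D = feats.shape
    losses = []
    for c in range(C):
        # L2-normalize the category-c embeddings of all B samples
        z = feats[:, c]
        z = z / np.linalg.norm(z, axis=1, keepdims=True)
        pos = pos_mask[:, c]
        if pos.sum() < 2:          # need at least two positives to contrast
            continue
        sim = z @ z.T / temperature
        np.fill_diagonal(sim, -np.inf)            # exclude self-pairs
        # numerically stable row-wise log-softmax
        m = sim.max(axis=1, keepdims=True)
        log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
        # positive pairs: both samples are reliable positives for category c
        pos_pairs = np.outer(pos, pos)
        np.fill_diagonal(pos_pairs, False)
        # average log-probability over each anchor's positive partners
        anchor_loss = -np.where(pos_pairs, log_prob, 0.0).sum(axis=1)
        anchor_loss = anchor_loss / np.maximum(pos_pairs.sum(axis=1), 1)
        losses.append(anchor_loss[pos].mean())
    return float(np.mean(losses)) if losses else 0.0
```

As a sanity check, when the reliable positives of a category share an identical embedding and the remaining samples are orthogonal to them, the per-category loss approaches zero; the actual PGCL loss additionally relies on CLIP-derived prompt representations, which this sketch omits.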

*Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.*