Abstract
Semi-supervised learning (SSL) has shown great promise in utilizing both labeled and unlabeled data, particularly through consistency regularization. To improve the model's learning efficiency, we propose a dual-branch co-training approach. After dividing the training data into two subsets, each branch trains a teacher-student model pair, where the teacher's weights are updated via an exponential moving average (EMA) of the student's. The framework combines a supervised loss and an unsupervised loss to optimize the model. Upon label expansion (e.g., pseudo-labeling), an additional otherside-view loss is introduced, promoting agreement between the branches' predictions on shared data. This loss mitigates errors arising from incorrect pseudo-labels and enhances the overall robustness of training. By dynamically adjusting pseudo-label inclusion based on confidence thresholds, our method reduces the impact of noisy data and prevents overfitting. As a result, we demonstrate the effectiveness of the proposed method in leveraging unlabeled data while maintaining high performance.
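The abstract names three concrete mechanisms: the EMA teacher update, confidence-thresholded pseudo-labeling, and an agreement term between the two branches. A minimal PyTorch sketch of these steps is given below; the function names, the threshold of 0.95, the decay of 0.99, and the use of MSE on probabilities for the agreement term are illustrative assumptions, not the paper's actual identifiers or exact loss form.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.99) -> None:
    """Update teacher weights as an exponential moving average of the student's."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)


def pseudo_label_loss(teacher_logits: torch.Tensor,
                      student_logits: torch.Tensor,
                      threshold: float = 0.95) -> torch.Tensor:
    """Cross-entropy on teacher pseudo-labels, keeping only confident predictions.

    The confidence mask is the 'dynamic pseudo-label inclusion' step: samples
    whose teacher confidence falls below the threshold contribute no gradient,
    which suppresses noisy pseudo-labels.
    """
    probs = teacher_logits.softmax(dim=-1)
    conf, pseudo = probs.max(dim=-1)
    mask = (conf >= threshold).float()
    per_sample = F.cross_entropy(student_logits, pseudo, reduction="none")
    return (per_sample * mask).sum() / mask.sum().clamp(min=1.0)


def otherside_agreement_loss(logits_a: torch.Tensor,
                             logits_b: torch.Tensor) -> torch.Tensor:
    """Encourage the two branches to agree on shared data.

    MSE between the branches' predicted distributions is one common choice
    for such an agreement term; the paper's exact formulation may differ.
    """
    return F.mse_loss(logits_a.softmax(dim=-1), logits_b.softmax(dim=-1))
```

In a training loop, each branch's student would be optimized on the sum of its supervised loss, `pseudo_label_loss` on unlabeled data, and `otherside_agreement_loss` against the other branch, followed by an `ema_update` of its teacher.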
Table of Contents
1. Introduction
2. Related Research
2.1 Consistency Regularization
3. Breast Ultrasound Images
4. Otherside View Co-training
5. Experiments and Results
6. Conclusion
Acknowledgement
References
