
UGPL: Uncertainty-Guided Progressive Learning for Evidence-Based Classification in Computed Tomography

Published on Jul 18
· Submitted by shravvvv on Jul 22

Abstract

Accurate classification of computed tomography (CT) images is essential for diagnosis and treatment planning, but existing methods often struggle with the subtle and spatially diverse nature of pathological features. Current approaches typically process images uniformly, limiting their ability to detect localized abnormalities that require focused analysis. We introduce UGPL, an uncertainty-guided progressive learning framework that performs a global-to-local analysis by first identifying regions of diagnostic ambiguity and then conducting detailed examination of these critical areas. Our approach employs evidential deep learning to quantify predictive uncertainty, guiding the extraction of informative patches through a non-maximum suppression mechanism that maintains spatial diversity. This progressive refinement strategy, combined with an adaptive fusion mechanism, enables UGPL to integrate both contextual information and fine-grained details. Experiments across three CT datasets demonstrate that UGPL consistently outperforms state-of-the-art methods, achieving improvements of 3.29%, 2.46%, and 8.08% in accuracy for kidney abnormality, lung cancer, and COVID-19 detection, respectively. Our analysis shows that the uncertainty-guided component provides substantial benefits, with performance dramatically increasing when the full progressive learning pipeline is implemented. Our code is available at: https://github.com/shravan-18/UGPL
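As a rough illustration of the uncertainty-guided, spatially diverse patch extraction described in the abstract, the sketch below combines the standard evidential deep learning uncertainty (u = K / S, with S the total Dirichlet strength) with a greedy non-maximum suppression over an uncertainty map. This is a toy example, not the authors' implementation: the map size, suppression radius, and patch count are made-up parameters.

```python
import numpy as np

def evidential_uncertainty(evidence, num_classes):
    """Dirichlet-based uncertainty from evidential deep learning:
    u = K / S, where S is the total Dirichlet strength
    (sum of alpha = evidence + 1)."""
    alpha = evidence + 1.0
    S = alpha.sum(axis=-1)
    return num_classes / S

def select_patches_nms(uncertainty_map, num_patches=3, min_dist=8):
    """Greedily pick the most uncertain locations, zeroing out
    neighbours within min_dist so the chosen patch centers stay
    spatially diverse (a simple non-maximum suppression)."""
    u = uncertainty_map.copy()
    h, w = u.shape
    centers = []
    for _ in range(num_patches):
        idx = np.unravel_index(np.argmax(u), u.shape)
        if u[idx] <= 0:
            break  # no uncertain regions left to examine
        centers.append(idx)
        y, x = idx
        y0, y1 = max(0, y - min_dist), min(h, y + min_dist + 1)
        x0, x1 = max(0, x - min_dist), min(w, x + min_dist + 1)
        u[y0:y1, x0:x1] = 0  # suppress the neighbourhood
    return centers

# Toy 32x32 uncertainty map with two ambiguous hot spots
rng = np.random.default_rng(0)
umap = rng.random((32, 32)) * 0.1
umap[5, 5] = 0.9
umap[20, 25] = 0.8
centers = select_patches_nms(umap, num_patches=2, min_dist=8)
print(centers)
```

In the full pipeline, patches cropped around these centers would be passed to a local network and fused with the global prediction; here the point is only how uncertainty plus NMS yields focused yet diverse regions.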

Community

Paper author · Paper submitter

🧠 This paper introduces UGPL: a method for guiding CT image classification by leveraging uncertainty estimates to focus analysis on ambiguous regions through progressive, multi-scale refinement.

๐Ÿ  Project Page: https://shravan-18.github.io/UGPL/
๐Ÿ’ป Code: https://github.com/shravan-18/UGPL
๐Ÿ“„ arXiv: https://arxiv.org/abs/2507.14102


