Collaterally cued labelling framework underpinning semantic-level visual content descriptor
Zhu, M. and Badii, A. (2007) Collaterally cued labelling framework underpinning semantic-level visual content descriptor. Lecture Notes in Computer Science, 4781. pp. 379-390. ISSN 0302-9743. ISBN 978-3-540-76413-7.
Full text not archived in this repository.
To link to this article, use DOI: 10.1007/978-3-540-76414-4
In this paper, we introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The work can be viewed as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines collateral knowledge extracted from the texts accompanying the images with state-of-the-art low-level visual feature extraction techniques to automatically assign textual keywords to image regions. A subset of the Corel image collection was used to evaluate the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
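The abstract's core idea of assigning collateral-text keywords to image regions and building a semantic-level descriptor from them can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual CCL algorithm: the nearest-prototype matching rule, the feature vectors, and all function names here are hypothetical assumptions for illustration only.

```python
import math

def assign_keywords(region_features, keyword_prototypes):
    """Assign to each image region the collateral keyword whose visual
    prototype is nearest in low-level feature space (Euclidean distance).
    keyword_prototypes: dict mapping keyword -> prototype feature tuple
    (a hypothetical stand-in for a learned keyword/appearance model)."""
    labels = []
    for feat in region_features:
        best_kw, best_dist = None, float("inf")
        for kw, proto in keyword_prototypes.items():
            d = math.dist(feat, proto)
            if d < best_dist:
                best_kw, best_dist = kw, d
        labels.append(best_kw)
    return labels

def semantic_descriptor(labels, vocabulary):
    """Build a semantic-level feature vector for the whole image:
    a normalised histogram of assigned keywords over the vocabulary."""
    counts = [labels.count(kw) for kw in vocabulary]
    total = sum(counts) or 1  # avoid division by zero for empty images
    return [c / total for c in counts]

# Toy usage: two keyword prototypes in a 2-D feature space,
# three segmented regions to be labelled.
prototypes = {"sky": (0.1, 0.9), "grass": (0.8, 0.2)}
regions = [(0.15, 0.85), (0.75, 0.25), (0.2, 0.8)]
labels = assign_keywords(regions, prototypes)
descriptor = semantic_descriptor(labels, ["sky", "grass"])
```

The resulting descriptor is a fixed-length vector indexed by the keyword vocabulary, so images labelled this way can be compared or classified with standard vector-space techniques, which is the sense in which the descriptor operates at the semantic rather than the pixel level.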