Collaterally cued labelling framework underpinning semantic-level visual content descriptor

Zhu, M. and Badii, A. (2007) Collaterally cued labelling framework underpinning semantic-level visual content descriptor. Lecture Notes in Computer Science, 4781, pp. 379-390. ISSN 0302-9743. ISBN 978-3-540-76413-7.

Full text not archived in this repository.

It is advisable to refer to the publisher's version if you intend to cite this work.

To link to this item, use the DOI: 10.1007/978-3-540-76414-4

Abstract/Summary

In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt at bridging the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines the collateral knowledge extracted from the texts accompanying the images with state-of-the-art low-level visual feature extraction techniques to automatically assign textual keywords to image regions. A subset of the Corel image collection was used to evaluate the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
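The fusion idea described in the abstract — combining cues from collateral text with low-level visual features to assign keywords to image regions — can be illustrated with a minimal sketch. All names, prototype vectors, weights, and data below are illustrative assumptions, not the authors' actual CCL implementation.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical prototype visual feature vectors, one per keyword.
PROTOTYPES = {
    "tiger": [0.9, 0.2, 0.1],
    "grass": [0.1, 0.8, 0.3],
    "water": [0.1, 0.3, 0.9],
}

def label_region(region_features, collateral_text, alpha=0.5):
    """Score each candidate keyword by a weighted mix of visual
    similarity to its prototype and a binary collateral-text cue
    (does the keyword occur in the accompanying text?); return
    the best-scoring keyword."""
    words = set(collateral_text.lower().split())

    def score(kw):
        visual = cosine(region_features, PROTOTYPES[kw])
        textual = 1.0 if kw in words else 0.0
        return alpha * visual + (1 - alpha) * textual

    return max(PROTOTYPES, key=score)
```

For example, a region whose features resemble the "tiger" prototype, accompanied by the caption "a tiger resting on grass", would be labelled "tiger": both modalities agree, so its fused score dominates. The weight `alpha` balancing the two modalities is an assumption here, not a parameter from the paper.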

Item Type: Article
Refereed: Yes
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 15496
Uncontrolled Keywords: automatic image annotation, fusion of visual and non-visual features, collateral knowledge, semantic-level visual content descriptor, multimodal data modelling, image indexing and retrieval, PICTURES
Additional Information: Proceedings paper, 9th International Conference on Visual Information Systems (VISUAL 2007), held in Shanghai, China, in June 2007
