Semantic-level Visual Content Descriptor combining visual and textual cues as collaterals

Zhu, M. (2008) Semantic-level Visual Content Descriptor combining visual and textual cues as collaterals. In: SSE Systems Engineering Conference 2008, 25-26 Sep 2008, The University of Reading. (Unpublished)


It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

Abstract/Summary

In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an image labelling framework, called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques for automatically assigning linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
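The CCL idea described above — keeping a keyword from an image's collateral text only when the visual evidence of a region supports it, then building a keyword-based descriptor over the labelled regions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the vocabulary, the per-keyword visual prototypes, the cosine-similarity confirmation test, and the threshold are all assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical keyword vocabulary (illustrative; not from the paper).
VOCAB = ["sky", "water", "grass", "building"]

def confirm_labels(region_features, keyword_prototypes, candidate_keywords,
                   threshold=0.8):
    """Collaterally-confirmed labelling sketch: a region keeps a candidate
    keyword (drawn from the image's collateral text) only if the region's
    low-level visual features are sufficiently similar to that keyword's
    visual prototype. Cosine similarity and the threshold are assumed here."""
    labels = []
    for kw in candidate_keywords:
        proto = keyword_prototypes[kw]
        sim = (region_features @ proto /
               (np.linalg.norm(region_features) * np.linalg.norm(proto)))
        if sim >= threshold:
            labels.append(kw)
    return labels

def semantic_feature_vector(region_labels, vocab=VOCAB):
    """High-level descriptor sketch: a normalised keyword-frequency vector
    accumulated over all labelled regions of one image."""
    vec = np.zeros(len(vocab))
    for labels in region_labels:
        for kw in labels:
            vec[vocab.index(kw)] += 1.0
    total = vec.sum()
    return vec / total if total > 0 else vec

# Usage: confirm keywords for one region, then build the image descriptor.
protos = {"sky": np.array([1.0, 0.0]), "water": np.array([0.0, 1.0])}
region = np.array([0.9, 0.1])                      # toy visual features
labels = confirm_labels(region, protos, ["sky", "water"])
descriptor = semantic_feature_vector([labels, ["sky", "grass"]])
```

Descriptors of this form can then be compared with standard vector-space measures for the clustering and retrieval tasks the abstract mentions.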

Item Type: Conference or Workshop Item (Paper)
Refereed: No
Divisions: Science
ID Code: 1070

