Semantic-level Visual Content Descriptor combining visual and textual cues as collaterals

Zhu, M. (2008) Semantic-level Visual Content Descriptor combining visual and textual cues as collaterals. In: SSE Systems Engineering Conference 2008, 25-26 Sep 2008, The University of Reading. (Unpublished)
Abstract

In this paper, we introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an image labelling framework, called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used to evaluate our proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
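To make the idea of a keyword-based semantic descriptor concrete, the following is a minimal sketch, not the paper's implementation: it assumes region-level keyword labels (as CCL would assign) are already available, maps them onto an illustrative fixed vocabulary to form a term-frequency feature vector, and ranks images by cosine similarity for retrieval. The vocabulary, image names, and weighting scheme are assumptions for illustration only.

```python
# Illustrative sketch: building a semantic-level feature vector from
# region keyword labels and using it for retrieval. The actual CCL
# framework derives the labels from collateral text combined with
# low-level visual features; that step is assumed done here.
from collections import Counter
import math

VOCABULARY = ["sky", "water", "tree", "building", "person", "animal"]  # assumed keyword set


def semantic_vector(region_labels):
    """Map the keywords assigned to an image's regions onto the fixed
    vocabulary, producing a term-frequency style feature vector."""
    counts = Counter(region_labels)
    return [counts.get(term, 0) for term in VOCABULARY]


def cosine_similarity(a, b):
    """Standard cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


# Example: rank database images against a query by semantic similarity.
query = semantic_vector(["sky", "water", "tree"])
database = {
    "img_001": semantic_vector(["sky", "water", "water"]),
    "img_002": semantic_vector(["building", "person"]),
}
ranked = sorted(database, key=lambda k: cosine_similarity(query, database[k]), reverse=True)
print(ranked)  # images most semantically similar to the query come first
```

In this toy setup, retrieval operates on linguistic keywords rather than raw colour or texture features, which is the sense in which such a descriptor works at the semantic level.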