Explainable visually-aware recommender systems based on heterogeneous information networks

Markchom, T. ORCID: https://orcid.org/0000-0002-2685-0738 (2024) Explainable visually-aware recommender systems based on heterogeneous information networks. PhD thesis, University of Reading



To link to this item, use DOI: 10.48683/1926.00119658

Abstract/Summary

Recommender systems are crucial for addressing information overload by providing users of online platforms, applications, or services with relevant item suggestions. Visually-aware recommender systems enhance recommendations by exploiting item images in domains such as fashion and entertainment. However, most visually-aware recommender systems lack explainability because they rely on black-box approaches such as deep-learning techniques. This thesis addresses the need for accurate and explainable visually-aware recommender systems by integrating visual information into Heterogeneous Information Networks (HINs). Three main research questions arise: How can visual information be effectively integrated into HINs? How can explainable visually-aware recommender systems be developed on the basis of HINs? What methods can be used to enhance the explainability of HIN-based recommender systems?

To address these questions, this thesis proposes a three-part solution. First, visually-augmented HINs are constructed by introducing visual factor nodes and relations generated from various image features, and a user representation learning method is introduced to combine semantic and visual preferences. Second, a Scalable and Explainable Visually-aware Recommender System (SEV-RS) framework is presented; SEV-RS uses meta-paths to provide explanations and enhances recommendations through scalable feature extraction from visually-augmented HINs. Lastly, a meta-path translation model is introduced to improve the explainability of meta-path-based systems, enhancing the performance of the proposed framework.

Extensive experiments with real-world datasets in the movie and clothing domains assessed the effectiveness of visually-augmented HINs. These augmented HINs, along with the proposed user representation learning method, were employed in a recommendation model using Collaborative Filtering with K-Nearest Neighbors (CF-KNN), which was compared with CF-KNN models based on regular HINs. The findings indicate the efficacy of visually-augmented HINs and the representation learning approach in enhancing CF-KNN, and they demonstrate the practicality of using visually-augmented HINs in state-of-the-art recommender systems. The evaluation of SEV-RS involved comparisons with state-of-the-art models on real-world and synthetic datasets. The results show that SEV-RS yields recommendations with high accuracy and explainability while requiring notably less computational time than other deep-learning models. The proposed meta-path translation approach was evaluated on two newly generated datasets derived from real-world recommendation data and compared with various sequence-to-sequence models on the task of translating a given meta-path into a group of meta-paths with higher explainability. The experiments validate its capability to generate more comprehensible alternative explanations for complex meta-paths.

These approaches create pathways for developing recommender systems that address users' visual preferences and offer explanations based on understandable meta-paths. Implementing these methods could enhance user experiences, leading to more engaging and personalized recommender systems, with potential adaptability for improving other graph-based explainable artificial intelligence applications.
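As a rough illustration of the visually-augmented HIN idea summarised above, the sketch below builds a toy heterogeneous graph in which item image embeddings are clustered and each cluster is treated as a visual factor node linked to its items, alongside ordinary user-item interaction edges. The choice of networkx and KMeans, the random stand-in features, and the node and relation names (`visual_factor`, `has_visual_factor`, `interacted_with`) are assumptions made for this sketch, not the construction used in the thesis.

```python
# Illustrative sketch only: a toy visually-augmented HIN.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for image embeddings of 20 items (the thesis derives these from item images).
item_ids = [f"item_{i}" for i in range(20)]
image_features = rng.normal(size=(len(item_ids), 64))

# Cluster the image embeddings; each cluster plays the role of one visual factor node.
n_factors = 4
labels = KMeans(n_clusters=n_factors, n_init=10, random_state=0).fit_predict(image_features)

hin = nx.MultiDiGraph()

# Heterogeneous nodes: users, items, and visual factors.
hin.add_nodes_from((f"user_{u}" for u in range(5)), node_type="user")
hin.add_nodes_from(item_ids, node_type="item")
hin.add_nodes_from((f"vf_{k}" for k in range(n_factors)), node_type="visual_factor")

# Visual relations: item --has_visual_factor--> visual factor.
for item, k in zip(item_ids, labels):
    hin.add_edge(item, f"vf_{k}", relation="has_visual_factor")

# Interaction relations: user --interacted_with--> item (random toy interactions).
for u in range(5):
    for idx in rng.choice(len(item_ids), size=4, replace=False):
        hin.add_edge(f"user_{u}", item_ids[idx], relation="interacted_with")

# A meta-path such as user -> item -> visual_factor -> item can now be traversed to
# reach candidate items that share a user's visual preferences; the traversed path
# itself serves as a human-readable explanation.
```

In this toy setup, recommendation scores could be derived from counts or weights of meta-path instances between a user and a candidate item, which is the general mechanism that makes meta-path-based explanations possible; the thesis's actual feature extraction and learning components are more involved than this sketch.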

Item Type: Thesis (PhD)
Thesis Supervisor: Ferryman, J.
Thesis/Report Department: School of Mathematical, Physical and Computational Sciences
Identification Number/DOI: https://doi.org/10.48683/1926.00119658
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 119658
