UoR at SemEval-2020 task 4: pre-trained sentence transformer models for commonsense validation and explanation

Markchom, T., Dhruva, B., Pravin, C. and Liang, H. (2020) UoR at SemEval-2020 task 4: pre-trained sentence transformer models for commonsense validation and explanation. In: International Workshop on Semantic Evaluation 2020, December 12-13, 2020, Barcelona, Spain, pp. 430-436.
Official URL: https://www.aclweb.org/anthology/2020.semeval-1.52

Abstract/Summary

The SemEval Task 4 Commonsense Validation and Explanation Challenge tests whether a system can differentiate natural language statements that make sense from those that do not. This work focuses on two subtasks, A and B: detecting against-common-sense statements, and selecting explanations of why such statements are false from the given options. Intuitively, commonsense validation requires knowledge beyond the given statements. We therefore propose a system that uses pre-trained sentence transformer models based on the BERT, RoBERTa and DistilBERT architectures to embed the statements before classification. The results show that these embeddings improve the performance of typical MLP and LSTM classifiers as downstream models for both subtasks, compared to regular tokenised statements. The embedded statements are shown to carry additional information from external resources that helps validate common sense in natural language.
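As a rough illustration of the pipeline the abstract describes (embed each statement with a pre-trained sentence transformer, then score it with a downstream classifier such as an MLP), the sketch below uses a hypothetical `embed` function as a stand-in for a real sentence transformer encoder (e.g. an SBERT model) and a randomly initialised MLP head; it is a minimal, self-contained sketch, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-in for a pre-trained sentence transformer encoder.
# A real system would call e.g. SBERT; here we hash tokens into a
# fixed-size vector so the sketch runs without external models.
def embed(statement: str, dim: int = 16) -> np.ndarray:
    vec = np.zeros(dim)
    for tok in statement.lower().split():
        vec[hash(tok) % dim] += 1.0
    return vec / max(np.linalg.norm(vec), 1e-9)

# Minimal MLP head (randomly initialised, for illustration only):
# maps a statement embedding to a "makes sense" score in (0, 1).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def mlp_score(emb: np.ndarray) -> float:
    h = np.maximum(emb @ W1 + b1, 0.0)                    # ReLU hidden layer
    return float(1.0 / (1.0 + np.exp(-(h @ W2 + b2))))    # sigmoid output

# Subtask A style decision: of two statements, flag the one with the
# lower "makes sense" score as the against-common-sense statement.
s1 = "He put a turkey into the fridge"
s2 = "He put an elephant into the fridge"
scores = [mlp_score(embed(s)) for s in (s1, s2)]
nonsense = (s1, s2)[int(np.argmin(scores))]
print(nonsense)
```

In the paper's setup the MLP (or LSTM) head is trained on the task data, so its scores are meaningful; the untrained head here only shows the data flow from statement to embedding to classification.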