Hidden interlocutor misidentification in practical Turing Tests

Shah, H. and Warwick, K. (2010) Hidden interlocutor misidentification in practical Turing Tests. Minds and Machines, 20 (3). pp. 441-454. ISSN 0924-6495

Full text not archived in this repository.

It is advisable to refer to the publisher's version if you intend to cite from this work.

To link to this item, use the DOI: 10.1007/s11023-010-9219-6

Abstract/Summary

Based on insufficient evidence and inadequate research, Floridi and his students report inaccuracies and draw false conclusions in their Minds and Machines evaluation, which this paper aims to clarify. Acting as invited judges, Floridi et al. participated in nine of the ninety-six Turing tests staged in the finals of the 18th Loebner Prize for Artificial Intelligence in October 2008. From the transcripts it appears that they used power over solidarity as an interrogation technique. As a result, they were fooled on several occasions into believing that a machine was a human and that a human was a machine. Worse still, they did not realise their mistake. This resulted in a combined correct identification rate of less than 56%. In their paper they assumed that they had made correct identifications when in fact they had been incorrect.

Item Type: Article
Refereed: Yes
Divisions: Science
ID Code: 17368
Uncontrolled Keywords: 18th Loebner Prize for Artificial Intelligence, Confederate effect, Elbot, Eliza effect, Gender-blurring effect, Jury-service, Parallel-paired, Practical Turing tests, Turing's imitation game
Publisher: Springer
