Andrew Ortony, Professor of Psychology, Northwestern University


 If, in the domain of emotions, a computer could lead a human to believe that it (the computer) was a human, one could argue that the computer had passed the emotional analog of the Turing Test. One approach to doing this would be for the human to pose questions to the machine based on a well-reputed test of Emotional Intelligence (EI), such as the MSCEIT. If the computer's performance were comparable to that of most humans, it would pass the test and, by definition, it would have to be designated as emotionally intelligent. Or would it?

 In this talk I shall give reasons for concluding that today's computers could perform quite well on such an "Emotional Turing Test," but that the conclusion to be drawn from such an outcome would not be that such computers are emotionally intelligent. Rather, in addition to the much-discussed question of the criterion for "correct" answers, such an outcome raises two other crucial questions: (1) what do current tests of emotional intelligence really measure? and (2) how ought we to conceive of the notion of EI? As regards the first of these questions, I shall argue that such tests mostly measure declarative knowledge and common associations. With respect to the second, I shall argue that because emotions typically involve the integration of affect, motivation, cognition, and behavior, a valid measure of EI needs to be sensitive to all of these constituents.

 Link to paper: The Science of Emotional Intelligence