How computers can sense and understand emotion
- Author: Dorothy Hardy, COG-MHEAR Research Programme Manager
The COG-MHEAR teams were pleased to welcome Prof Hui Yu, head of the Visual Computing Group at the University of Portsmouth, to talk about how computers can sense and understand human emotion. This is a key area of development as human-machine interaction becomes increasingly sophisticated. But how can complex emotions be ‘read’ by a computer?
The research is about perceiving facial behaviour and capturing the details of muscle movement, in order to understand the non-verbal information carried by social interaction and emotion. The researchers are investigating why facial expressions occur and how other humans perceive them, and they would like to work out how expressions connect to mental states.
Facial muscles are divided into groups, and each group is divided into action units; each facial expression can be described as a combination of action units. Prof Yu explained that as wearable devices become smaller and lighter, wearable emotion sensing can develop. The contraction of facial muscles is measured using electromyography (EMG) sensors that can be incorporated into virtual reality (VR) headsets.

But there is a problem: a VR headset covers only half of the face. An extra camera can be attached to the headset to capture lower facial movement, but this is an extra burden for the user and is neither comfortable nor easy to use. To avoid needing the extra camera, the researchers have worked out ways of inferring what is happening across the rest of the face from the readings of sensors on the upper part of the face. Emotions such as happiness, surprise and fear involve similar groups of facial muscles, so statistical methods are used to interpret the readings, and real-time access to a database enables the most likely emotion to be calculated. As with many things, this works well in the laboratory with a front view of the face; in daily life, with cameras placed around a room, it is harder to detect facial expression. The work goes on.
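To give a flavour of this kind of inference, here is a minimal, hypothetical sketch in Python. It assumes per-channel RMS amplitude as the EMG feature and a simple per-emotion Gaussian model as the statistical method; the emotion set, channel names and all of the numbers are invented for illustration, and this is not the approach used by the Visual Computing Group.

```python
# Hypothetical sketch only: compare upper-face EMG readings against
# per-emotion statistics and report the most likely emotion.
# Emotions, channel names, features and the model are assumptions for the example.
import numpy as np

EMOTIONS = ["happiness", "surprise", "fear"]
CHANNELS = ["frontalis_left", "frontalis_right", "corrugator", "orbicularis_oculi"]


def extract_features(emg_window: np.ndarray) -> np.ndarray:
    """Summarise a raw EMG window (samples x channels) as per-channel RMS amplitude."""
    return np.sqrt(np.mean(emg_window ** 2, axis=0))


class GaussianEmotionModel:
    """Diagonal Gaussian per emotion (naive-Bayes style) over the channel features."""

    def fit(self, X: np.ndarray, y: np.ndarray) -> "GaussianEmotionModel":
        self.means_ = np.array([X[y == k].mean(axis=0) for k in range(len(EMOTIONS))])
        self.vars_ = np.array([X[y == k].var(axis=0) + 1e-6 for k in range(len(EMOTIONS))])
        return self

    def most_likely_emotion(self, features: np.ndarray) -> str:
        # Log-likelihood of the observed features under each emotion's Gaussian.
        log_lik = -0.5 * np.sum(
            np.log(2 * np.pi * self.vars_) + (features - self.means_) ** 2 / self.vars_,
            axis=1,
        )
        return EMOTIONS[int(np.argmax(log_lik))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Synthetic labelled feature vectors standing in for a reference database.
    per_class = 50
    class_means = np.array([
        [1.0, 1.0, 0.2, 0.8],   # happiness (made-up channel activations)
        [1.5, 1.5, 0.3, 1.2],   # surprise
        [0.5, 0.5, 1.5, 0.6],   # fear
    ])
    X = np.vstack([rng.normal(m, 0.2, size=(per_class, len(CHANNELS))) for m in class_means])
    y = np.repeat(np.arange(len(EMOTIONS)), per_class)

    model = GaussianEmotionModel().fit(X, y)

    # A new half-second window of raw EMG (500 samples x 4 channels), also made up.
    window = rng.normal(0.0, 1.4, size=(500, len(CHANNELS)))
    print(model.most_likely_emotion(extract_features(window)))
```

In a real system the reference statistics would come from labelled EMG recordings rather than synthetic numbers, and the features and model would be far richer, but the shape of the computation (summarise the sensor readings, compare them against per-emotion statistics, report the best match) is the same.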
There is a demand for realism and fidelity in augmented reality and immersive experiences, and the research feeds directly into this. Modelling of facial expressions started over 40 years ago and has become increasingly realistic, with the aim of improving the user experience. But ‘uncanny valley’ effects can cause problems: depictions of humans that are nearly, but not quite, realistic can be very off-putting. Cartoon characters are acceptable; models that look almost but not quite lifelike often are not. So depictions of anatomy and movement need to be developed to a very high level of fidelity to give a comfortable viewing experience. The research is leading to a greater understanding of the way in which human emotions are shown in faces, and to better ways of simulating this.
This work was supported in part by the Engineering and Physical Sciences Research Council Grant (EP/N025849/1) and in part by the Royal Academy of Engineering Grant (IFS1819\9) and Emteq Ltd.