Automatic facial expression analysis during speech communication
Keywords:
FACS, emotions, affective computing, facial expressions

Abstract
This article presents the results of research conducted during the development of a computer system for facial expression analysis. We highlight several problems specific to categorizing facial expressions in automated analysis. These problems stem, first, from the temporal overlap between articulatory movements and FACS action units (AUs), and second, from the need to differentiate facial events that differ in meaning, i.e., to distinguish action units and their combinations as indicators of basic emotions, semantic stress in a speech message, emotion emblems, and facial dialects.
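To make the first problem concrete, the following is a minimal sketch of detecting a temporal overlap between an AU activation and an articulatory movement. All names, times, and the event representation are hypothetical illustrations, not EmoRadar's data model.

```python
from dataclasses import dataclass

@dataclass
class IntervalEvent:
    """A time-stamped facial event: an AU activation or an articulatory movement."""
    label: str      # e.g. "AU12" or "viseme /o/"
    onset: float    # seconds
    offset: float   # seconds

def overlaps(a: IntervalEvent, b: IntervalEvent) -> bool:
    """True if the two events share any span of time."""
    return a.onset < b.offset and b.onset < a.offset

# Hypothetical case: a lip-corner puller (AU12) overlapping a rounded-vowel
# articulation, so the AU must be scored against articulatory interference.
au = IntervalEvent("AU12", onset=1.20, offset=2.05)
articulation = IntervalEvent("viseme /o/", onset=1.80, offset=2.30)
print(overlaps(au, articulation))  # True
```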
We achieved high precision in facial expression analysis by applying the comprehensive FACS-based approach rather than the selective approach, given the fundamental flaws of the latter. We developed EmoRadar, a computer system for video analysis; it operates through direct analysis of the facial surface, relies on original computer vision protocols, and is, in effect, an implementation of the computer-FACS concept.
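The contrast between the two approaches can be sketched as follows. The AU estimator here is a stub and the AU lists are illustrative; the article does not describe EmoRadar's internal protocols, so none of this should be read as its actual implementation.

```python
# Illustrative subset of FACS AUs, and the kind of preselected,
# emotion-linked subset a selective system would track.
ALL_AUS = [1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26]
HAPPINESS_AUS = [6, 12]

def estimate_au_intensities(frame) -> dict[int, float]:
    """Stub returning zero intensities; a real system would derive these
    from the facial surface in the frame."""
    return {au: 0.0 for au in ALL_AUS}

def selective_scoring(frame) -> dict[int, float]:
    # Selective approach: only emotion-linked AUs are measured, so evidence
    # that a combination means something else (e.g. an emblem) is discarded.
    scores = estimate_au_intensities(frame)
    return {au: scores[au] for au in HAPPINESS_AUS}

def comprehensive_scoring(frame) -> dict[int, float]:
    # Comprehensive approach: every AU is measured on every frame, preserving
    # the full combination needed to interpret the facial event.
    scores = estimate_au_intensities(frame)
    return {au: scores[au] for au in ALL_AUS}
```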
Empirical testing of the software revealed the need to account for the specifics of detecting and categorizing certain action units and their combinations against the background of articulatory movements during speech. In our view, facial events that differ in meaning can be differentiated only through high-precision analysis of the onset and offset times of the action units that constitute them.
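One way such timing analysis could work, sketched below, is to treat AUs with near-synchronous onsets as a single combination. The 0.10-second tolerance and the interpretation of the examples are assumptions for illustration, not thresholds or rules reported in the article.

```python
def same_facial_event(au_onsets: list[float], tolerance: float = 0.10) -> bool:
    """Heuristic sketch: AUs whose onsets fall within a small time window
    are treated as one combination; the tolerance is an assumed value."""
    return max(au_onsets) - min(au_onsets) <= tolerance

# Near-synchronous onsets of, say, AU6 and AU12 plausibly form one event
# (an enjoyment smile); a much later, isolated AU12 during speech is more
# likely semantic stress or an emblem.
print(same_facial_event([3.00, 3.04]))        # True: one combination
print(same_facial_event([3.00, 3.04, 3.60]))  # False: the late AU stands apart
```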