Facial Emotion Recognition System

2.0 Literature Review

Face detection is a computer technology that identifies human faces in arbitrary images. Human faces share the same basic configuration: two eyes above a nose and a mouth. Once computers could reliably detect faces, research moved on to further face processing, including emotion recognition.

2.1 Face Acquisition

In this step, the user's face is acquired so that facial features can be extracted from a cluttered background. In Robust Real-time Object Detection (P. Viola, 2002), the authors used the AdaBoost algorithm to detect the frontal view of faces quickly; their system separates the face from the background and computes facial features in a short time. However, a frontal view cannot be guaranteed in every environment, so some researchers have combined side and frontal views to detect faces. Moreover, this algorithm fails on faces rotated by more than 10 degrees. In Expert System for Automatic Analysis of Facial Expressions (M. Pantic, 2000), the author fed dual-view faces, a frontal view and a 90-degree profile, straight into the system using two cameras mounted on the user's head. Furthermore, in Decoding of Profile Versus Full-Face Expressions of Affect (Kleck, R. & Mendolia, 1990), the authors used three views in their system: full face, right profile, and left profile. They found that the full and right views were accurate in detecting positive expressions, while the left view was more accurate than the right in detecting negative expressions. These papers show that a system can recognize faces not only from the front but also from the left and right views. In order to im...... middle of paper ...... will combine the two techniques, the AdaBoost algorithm and color detection, to detect the human face.
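The combined pipeline alluded to above can be sketched as follows: a fast skin-color test first narrows the image down to candidate regions, and the AdaBoost cascade (e.g. OpenCV's Haar cascade detector) would then search only inside those regions. The RGB thresholds and the region logic below are common heuristics chosen for illustration, not values taken from the cited papers; this is a minimal numpy sketch, not a production detector.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of likely skin pixels using a simple RGB rule.

    The thresholds are a widely used heuristic (assumed here for
    illustration): skin tends to have high red, moderate green,
    and red dominating both green and blue.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r - g > 15) & (r > b)

def candidate_region(mask):
    """Bounding box (top, left, bottom, right) of the skin pixels,
    or None when no skin-colored pixels are found."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()

# Synthetic 100x100 image: blue background with one skin-toned patch.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[..., 2] = 200                      # blue background
img[30:70, 40:80] = (200, 140, 110)    # skin-like RGB patch
box = candidate_region(skin_mask(img))
print(box)  # the AdaBoost cascade would then run only inside this box
```

Restricting the cascade to skin-colored regions cuts the number of windows it must evaluate, which is the main motivation for combining the two techniques.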
During the feature extraction stage, the authors proposed geometric and appearance-based methods to extract facial features, and some authors also reported that combining both approaches yields higher accuracy than a system using only one. This suggested to the project that there is no harm in applying the two approaches together. Finally, for the facial emotion recognition stage, the literature compares the strengths and weaknesses of HMMs and neural networks. It shows that approaches able to support various combinations of action units (AUs) produce better results, a clear hint for this project to avoid techniques that cannot handle multiple AU combinations.
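The fusion idea above, concatenating geometric features (distances between facial landmarks) with appearance features, can be sketched as follows. The landmark layout, the use of all pairwise distances, and the intensity histogram standing in for richer appearance descriptors such as LBP or Gabor responses are all assumptions made for illustration, not the methods of the cited papers.

```python
import numpy as np

def geometric_features(landmarks):
    """All pairwise distances between landmark points (x, y).

    Real systems pick specific distances (mouth width, eye-to-brow
    gap, ...); pairwise distances keep this sketch generic.
    """
    pts = np.asarray(landmarks, dtype=float)
    n = len(pts)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i in range(n) for j in range(i + 1, n)])

def appearance_features(gray_patch, bins=16):
    """Normalized intensity histogram of a face patch, a crude
    stand-in for appearance descriptors such as LBP."""
    hist, _ = np.histogram(gray_patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

# Toy input: 5 landmarks and a random 48x48 grayscale face patch.
rng = np.random.default_rng(0)
landmarks = [(10, 15), (30, 15), (20, 25), (14, 35), (26, 35)]
patch = rng.integers(0, 256, size=(48, 48))

fused = np.concatenate([geometric_features(landmarks),
                        appearance_features(patch)])
print(fused.shape)  # 10 pairwise distances + 16 histogram bins = (26,)
```

The fused vector would then feed whatever classifier the project settles on; because the two feature families capture complementary cues (shape versus texture), concatenation is the simplest way to let a single classifier exploit both.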