Emotion-detection-master

Category: Other
Development tool: Python
File size: 16556 KB
Downloads: 1
Upload date: 2019-06-11 17:53:00
Uploader: meiyingjie
Description: A software for emotion detection, written in Python.

File list:
Keras (0, 2019-04-19)
Keras\haarcascade_frontalface_default.xml (963439, 2019-04-19)
Keras\kerasmodel.py (5052, 2019-04-19)
ResearchPaper.pdf (13524924, 2019-04-19)
TFLearn (0, 2019-04-19)
TFLearn\emojis (0, 2019-04-19)
TFLearn\emojis\angry.png (14628, 2019-04-19)
TFLearn\emojis\disgusted.png (16186, 2019-04-19)
TFLearn\emojis\fearful.png (16175, 2019-04-19)
TFLearn\emojis\happy.png (16613, 2019-04-19)
TFLearn\emojis\neutral.png (12094, 2019-04-19)
TFLearn\emojis\sad.png (16397, 2019-04-19)
TFLearn\emojis\surprised.png (14663, 2019-04-19)
TFLearn\haarcascade_frontalface_default.xml (963439, 2019-04-19)
TFLearn\model.py (3207, 2019-04-19)
TFLearn\multiface.py (1747, 2019-04-19)
TFLearn\singleface.py (3887, 2019-04-19)
accuracy.png (53524, 2019-04-19)
examples (0, 2019-04-19)
examples\angry.png (411702, 2019-04-19)
examples\fearful.png (410702, 2019-04-19)
examples\happy.png (425249, 2019-04-19)
examples\multiface.png (526272, 2019-04-19)
examples\neutral.png (421885, 2019-04-19)
examples\sad.png (424404, 2019-04-19)
examples\surprised.png (410275, 2019-04-19)
requirements.txt (81, 2019-04-19)

# Emotion-detection

## Introduction

This project classifies the emotion on a person's face into one of **seven categories**, using deep convolutional neural networks. This repository is an implementation of [this](https://github.com/atulapra/Emotion-detection/blob/master/ResearchPaper.pdf) research paper. The model is trained on the **FER-2013** dataset, which was published at the International Conference on Machine Learning (ICML). The dataset consists of 35887 grayscale, 48x48-pixel face images labelled with **seven emotions**: angry, disgusted, fearful, happy, neutral, sad and surprised.

## Dependencies

* Python 3.6, [OpenCV 3 or 4](https://opencv.org/), [Tensorflow](https://www.tensorflow.org/), [TFLearn](http://tflearn.org/), [Keras](https://keras.io/)
* To install the required packages, run `pip install -r requirements.txt`.

## Usage

There are two versions of this repository, written using **TFLearn** and **Keras**. Usage instructions for each version are given below. Both versions work equally well if you only want to detect emotions on one face in the image. However, I suggest you use the Keras implementation, since it gives better results when there is more than one face.

* First, clone the repository with `git clone https://github.com/atulapra/Emotion-detection.git` and enter the cloned folder: `cd Emotion-detection`.

### TFLearn

* Download the **trained model** files from [here](https://drive.google.com/file/d/1rdgSdMcXIvfoPmf702UCtH6RNcvkKFu7/view?usp=sharing), extract them and copy the files into the current working directory.
* To detect emotions in only **one face**, run `python model.py singleface`.
* To detect emotions on all faces close to the camera, run `python model.py multiface`. Note that this sometimes produces incorrect predictions.
* The folder structure is of the form:

  TFLearn:
  * emojis (folder)
  * `model.py` (file)
  * `multiface.py` (file)
  * `singleface.py` (file)
  * `model_1_atul.tflearn.data-00000-of-00001` (file)
  * `model_1_atul.tflearn.index` (file)
  * `model_1_atul.tflearn.meta` (file)
  * `haarcascade_frontalface_default.xml` (file)

### Keras

* Download the FER-2013 dataset from [here](https://anonfile.com/bdj3tfoeba/data_zip) and unzip it inside the Keras folder. This creates the folder `data`.
* To train the model, or to retrain after making changes to it, run `python kerasmodel.py --mode train`.
* To view the predictions without training again, download my pre-trained model (`model.h5`) from [here](https://drive.google.com/file/d/1FUn0XNOzf-nQV7QjbBPA6-8GLoHNNgv-/view?usp=sharing) and then run `python kerasmodel.py --mode display`.
* The folder structure is of the form:

  Keras:
  * data (folder)
  * `kerasmodel.py` (file)
  * `haarcascade_frontalface_default.xml` (file)
  * `model.h5` (file)

* By default, this implementation detects emotions on all faces in the webcam feed.
* With a simple 4-layer CNN, the test accuracy stopped increasing after around 50 epochs, at 63.2%.

![Accuracy plot](accuracy.png)

## Algorithm

* First, a **haar cascade** is used to detect faces in each frame of the webcam feed.
* The region of the image containing the face is resized to **48x48** and passed as input to the ConvNet.
* The network outputs a list of **softmax scores** for the seven classes.
* The emotion with the maximum score is displayed on the screen.

## Example Outputs

![One face](examples/happy.png)
![Multiface](examples/multiface.png)

## References

* "Challenges in Representation Learning: A report on three machine learning contests."
  I Goodfellow, D Erhan, PL Carrier, A Courville, M Mirza, B Hamner, W Cukierski, Y Tang, DH Lee, Y Zhou, C Ramaiah, F Feng, R Li, X Wang, D Athanasakis, J Shawe-Taylor, M Milakov, J Park, R Ionescu, M Popescu, C Grozea, J Bergstra, J Xie, L Romaszko, B Xu, Z Chuang and Y Bengio. arXiv, 2013.
* [Emotion-recognition-neural-networks](https://github.com/isseu/emotion-recognition-neural-networks)
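The four steps under **Algorithm** can be sketched in plain Python. This is an illustrative sketch, not the repository's actual code: the real scripts use OpenCV's `CascadeClassifier` with `haarcascade_frontalface_default.xml` for face detection and `cv2.resize` for resizing, while the nearest-neighbour resize below is a dependency-light stand-in, and the class ordering is an assumption inferred from the emoji filenames.

```python
import numpy as np

# Assumed class order, inferred from the filenames in TFLearn/emojis;
# the authoritative order is defined in the repository's scripts.
EMOTIONS = ["angry", "disgusted", "fearful", "happy", "neutral", "sad", "surprised"]

def crop_and_resize(gray_frame, box, size=48):
    """Crop a detected face region and resize it to size x size using
    nearest-neighbour sampling (a stand-in for cv2.resize)."""
    x, y, w, h = box
    face = gray_frame[y:y + h, x:x + w]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return face[rows[:, None], cols]

def softmax(logits):
    """Numerically stable softmax over the network's raw outputs."""
    z = np.asarray(logits, dtype=float) - np.max(logits)
    p = np.exp(z)
    return p / p.sum()

def predict_emotion(logits):
    """Return the label with the highest softmax score, i.e. the
    emotion that the demo scripts display on screen."""
    scores = softmax(logits)
    i = int(np.argmax(scores))
    return EMOTIONS[i], float(scores[i])
```

In the actual pipeline, `crop_and_resize` would receive a face bounding box from the haar cascade, and `logits` would be the ConvNet's output for that 48x48 crop.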
