
Emotion Detection using Deep Learning


Project Description

Facial expression recognition has become an active area of research in computer vision and one of the most successful applications of image analysis and understanding. Examples of today's applications include Apple's Face ID and medical imaging tasks such as checking pupil dilation and monitoring patients. Advances in computer vision research will also give neuroscientists and psychologists useful insights into how the human brain works, and vice versa.


The scope of this project is to achieve emotion recognition using both images and videos as data inputs. Emotion recognition, the process of inferring human behaviour from emotional cues, relies on integrated information such as facial expressions, gestures and verbal speech, and it underlies a wide range of applications around the globe. The main goal of the project is to detect emotions using a convolutional neural network with the highest possible accuracy.
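As an initial illustration, the following is a minimal Keras sketch of such a convolutional network, assuming 48x48 grayscale face crops and the seven emotion classes listed later in this proposal; the layer sizes and the commented training call are illustrative assumptions, not the final architecture.

# Minimal Keras CNN sketch for 7-class emotion classification.
# Assumes 48x48 grayscale face crops; layer sizes are illustrative only.
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # neutral, joy, surprise, anger, sadness, fear, disgust

def build_emotion_cnn(input_shape=(48, 48, 1)):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_emotion_cnn()
# model.fit(x_train, y_train, epochs=30, validation_split=0.1)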

Introduction

 

Facial expressions play an important role in the recognition of emotions and are used in non-verbal communication as well as in identifying people. They are central to everyday emotional communication, second only to the tone of voice [1]. They are also an indicator of feelings, allowing a person to express an emotional state [2,3]. People can immediately recognize the emotional state of another person, and as a consequence, information about facial expressions is often used in automatic emotion recognition systems [4]. The aim of the project presented in this proposal is to recognize seven basic emotional states based on facial expressions: neutral, joy, surprise, anger, sadness, fear and disgust.

Challenges:

Pose variation, poor illumination and other imaging conditions remain challenging factors for all algorithms. Face recognition and emotion detection are the major applications of recognition systems, and many algorithms have tried to address these problems. Face recognition is a basic building block of modern authentication and identification applications, so its accuracy must be high to obtain good results. The Fisherface algorithm [1] offers a highly accurate approach to face recognition; it combines two kinds of analysis, principal component analysis (PCA) and linear discriminant analysis (LDA), to achieve recognition, as sketched below.
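For illustration only, the snippet below sketches how a Fisherface recognizer can be exercised through OpenCV's contrib module (opencv-contrib-python); the training images and labels are hypothetical placeholders rather than project data.

# Sketch: Fisherface recognition with OpenCV (requires opencv-contrib-python).
# train_images: list of equally sized grayscale face images; train_labels: int IDs.
import cv2
import numpy as np

def train_fisherfaces(train_images, train_labels):
    # The Fisherface recognizer internally performs PCA followed by LDA.
    recognizer = cv2.face.FisherFaceRecognizer_create()
    recognizer.train(train_images, np.array(train_labels))
    return recognizer

def predict_identity(recognizer, gray_face):
    # Returns the predicted label and a distance-based confidence value.
    label, confidence = recognizer.predict(gray_face)
    return label, confidence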

 

Background

In this section, we discuss the different ways in which a face can be detected from a given source.

Technology Stack:

  • OpenCV
  • Dlib – facial landmark detection
  • Keras
  • Python
  • Jupyter Notebook
  • JavaScript
  • Raspberry Pi (optional)

 

Features and Methodology:

 

Research and various publications suggest several ways to detect objects and subjects in an image; Haar cascades and deep learning face detectors are two approaches that fit our scenario. Haar cascades are the faster of the two methods: if a face is present in the image, they are the quickest way to locate it. The deep learning face detector, on the other hand, gives better accuracy. Since we expect to detect faces with larger variations in viewing angle, the deep learning detector makes more sense, while Haar cascades are sufficient when faces are positioned "straight on". Both techniques will be applied to the dataset to see which performs better as the project moves forward; a Haar cascade sketch is shown below.
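Below is a minimal Haar cascade sketch using OpenCV; the image path is a placeholder and the detection parameters are common defaults rather than tuned values.

# Sketch: Haar cascade face detection with OpenCV.
import cv2

# Load the frontal-face cascade that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("face.jpg")  # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns a list of (x, y, w, h) face bounding boxes.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)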

Object detection with deep learning:

When combined, Single Shot Detectors (SSDs) and MobileNets can be used for fast, real-time object detection. We use OpenCV's dnn module to load a pre-trained object detection network, which lets us pass input images or video frames through the network and obtain the bounding-box (x, y)-coordinates of each object in the image. Finally, we will analyze the results of applying these methods. Combining the two frameworks yields a fast, efficient deep learning-based approach to object detection; a sketch is given below.
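A rough sketch of this pipeline is shown below, assuming the commonly distributed Caffe MobileNet-SSD model; the prototxt/caffemodel filenames, the 300x300 input size and the 0.5 confidence threshold are assumptions, not fixed project choices.

# Sketch: object detection with a pre-trained MobileNet-SSD via OpenCV's dnn module.
import cv2

# Filenames are assumptions for the commonly distributed Caffe MobileNet-SSD model.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

image = cv2.imread("input.jpg")  # placeholder path
h, w = image.shape[:2]

# The published model expects 300x300 inputs with these scale/mean values.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        # Bounding-box coordinates are returned as fractions of the image size.
        x1 = int(detections[0, 0, i, 3] * w)
        y1 = int(detections[0, 0, i, 4] * h)
        x2 = int(detections[0, 0, i, 5] * w)
        y2 = int(detections[0, 0, i, 6] * h)
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)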

Facial Landmark detection for Emotions:

Using dlib and OpenCV, we can detect facial features in an image. Facial features represent the most important areas of the face, such as:

  • Eyes
  • Eyebrows
  • Forehead
  • Nose
  • Mouth
  • Jawline

Extracting these facial features has been applied in many applications, such as face swapping and FaceApp. In essence, facial landmark detection marks a set of points on the face in order to capture its geometric structure. Given an input image, the dlib package is used to map these points onto the person's face as (x, y)-coordinates, as in the sketch below.
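The snippet below is a minimal sketch of this step; it assumes dlib's pre-trained 68-point model file (shape_predictor_68_face_landmarks.dat) has already been downloaded, and the image path is a placeholder.

# Sketch: mapping 68 facial landmark (x, y) points with dlib.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # HOG + linear SVM face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face.jpg")  # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):       # step 1: face bounding boxes
    shape = predictor(gray, rect)    # step 2: 68 landmark points
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    for (x, y) in points:
        cv2.circle(image, (x, y), 1, (0, 0, 255), -1)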

Our main goal in the project is to detect the major facial structures and emotions using shape prediction analysis. There are two steps involved in this process.

  • Step #1 is to detect the location and size of the face in the image/video.
  • Step #2 is to detect the key facial structures on the face region.

The first step can be achieved in several ways. Haar cascades are one of them; we might also apply a pre-trained Histogram of Oriented Gradients (HOG) plus linear SVM object detector, or a deep learning-based algorithm, for face localization. The important point is that the particular method we use to obtain the face bounding box (i.e., the (x, y)-coordinates of the face in the image) does not really matter, as long as it yields a bounding box; a small wrapper illustrating this is sketched below.
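To make this concrete, the hypothetical helpers below normalize the output of either detector into a common (x, y, w, h) tuple, so the landmark step does not depend on which face detector produced the box.

# Sketch: normalizing different face detectors to a common (x, y, w, h) box.
# Both helper names are hypothetical; only the returned box format matters downstream.
import cv2
import dlib

def haar_boxes(gray, cascade):
    # Haar cascades already return (x, y, w, h) tuples.
    return list(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))

def hog_boxes(gray, detector):
    # Convert dlib rectangles to the same (x, y, w, h) format.
    return [(r.left(), r.top(), r.width(), r.height()) for r in detector(gray, 1)]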

After detecting the facial region to use for emotion recognition, we can proceed to step #2. There are a number of facial landmark detectors, but all methods try to localize features mainly within the following facial regions:

  • Mouth
  • Left eyebrow
  • Right eyebrow
  • Left eye
  • Right eye
  • Jaw
  • Nose

An ensemble of regression trees will be trained to estimate the facial landmark positions directly from pixel intensities. The end result is a landmark detector that can locate facial landmarks in real time with high quality; the indices of the 68 standard landmarks map onto the regions listed above, as sketched below.
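For reference, the mapping below lists the conventional index ranges of the 68-point model that correspond to the regions above; grouping points this way is what allows per-region geometry (for example, mouth opening or eyebrow raise) to be used as emotion cues. The helper name is illustrative.

# Conventional index ranges of the 68-point facial landmark model.
FACIAL_LANDMARK_REGIONS = {
    "jaw":           range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "mouth":         range(48, 68),
}

def region_points(points, region):
    # points: list of 68 (x, y) tuples from the landmark predictor.
    return [points[i] for i in FACIAL_LANDMARK_REGIONS[region]]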

Data Set:

Direct video feed from a camera source.

Input images: https://www.kaggle.com/c/emotion-detection-from-facial-expressions/data
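For the direct video feed, a minimal OpenCV capture loop such as the sketch below can supply frames to the detectors described above; camera index 0 is an assumption for the default webcam.

# Sketch: reading frames from a camera source with OpenCV.
import cv2

cap = cv2.VideoCapture(0)  # 0 = default webcam (assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... run face detection / landmark extraction / emotion model on `frame` ...
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()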

 

Timelines:

Time Line (2019-2020):

  • Oct 11th to Oct 25th, 2019 (2 weeks): Data set analysis; gathering of images and video inputs for use as project inputs and for training our models.
  • Oct 26th to November 22nd, 2019 (4 weeks): Image pre-processing and coding our Jupyter notebook for input models and training.
  • November 24th, 2019 to January 19th, 2020 (9 weeks): Testing various algorithms for prediction, face detection accuracy and emotion detection; analysis of various test data results.
  • January 20th to February 7th, 2020 (3 weeks): Report writing.
  • February 10th to March 10th, 2020 (3 weeks): Submit report and approvals.

 

References:

[1] Hyung-Ji Lee, Wan-Su Lee, Jae-Ho Chung, "Face recognition using Fisherface algorithm and elastic graph matching", IEEE International Conference on Image Processing, Vol. 1, pp. 998-1001, October 2001.

[2] Adrian Dinculescu, Cristian Vizitiu, Alexandru Nistorescu, Mihaela Marin, Andreea Vizitiu, "Novel Approach to Face Expression Analysis in Determining Emotional Valence and Intensity with Benefit for Human Space Flight Studies", 5th IEEE International Conference on E-Health and Bioengineering (EHB), pp. 1-4, November 2015.

[3] Rajesh K M, Naveenkumar M, "An Adaptive-Profile Modified Active Shape Model for Automatic Landmark Annotation Using OpenCV", International Journal of Engineering Research in Electronic and Communication Engineering (IJERECE), Vol. 3, Issue 5, pp. 18-21, May 2016.

[4] Samiksha Agrawal, Pallavi Khatri, "Facial Expression Detection Techniques: Based on Viola and Jones algorithm and Principal Component Analysis", Fifth International Conference on Advanced Computing Communication Technologies, pp. 108-112, February 2015.

[5] Przybyło J., "Automatyczne rozpoznawanie elementów mimiki twarzy w obrazie i analiza ich przydatności do sterowania" [Automatic recognition of facial expression elements in images and analysis of their usefulness for control], doctoral dissertation, Akademia Górniczo-Hutnicza, Kraków, 2008.

[6] Ratliff M. S., Patterson E., "Emotion recognition using facial expressions with active appearance models", Proceedings of the Third IASTED International Conference on Human Computer Interaction, ACTA Press, Anaheim, CA, USA, 2008, pp. 138-143.

[7] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", arXiv:1704.04861, 2017.

[8] https://github.com/weiliu89/caffe/tree/ssd

[9] https://github.com/Zehaos/MobileNet

[10] http://cocodataset.org/#home

 
