Emotion Recognition

Detect emotions from selected features of the voice and facial expressions

The main objective of the development phase is to build a set of models from the data corpus and compare them to find the best fit. Our tool detects emotional states from three main factors (a feature-extraction sketch follows the list):

Texture information of speech

Extracted features from speech

Facial expressions made by the speaker
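
The sketch below illustrates how these three factors could feed a single classifier. It is a minimal sketch, assuming librosa for the audio analysis; the facial-expression extractor is a hypothetical placeholder, since the actual model is product-specific.

    # Minimal sketch: turning the three factors into one feature vector.
    # Assumes librosa for audio; facial_features is a hypothetical stub.
    import numpy as np
    import librosa

    def speech_texture_features(path):
        """Factor 1 -- spectral 'texture' of the voice, summarised as MFCC statistics."""
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # shape (26,)

    def prosodic_features(path):
        """Factor 2 -- extracted speech features: pitch and energy statistics."""
        y, sr = librosa.load(path, sr=16000)
        f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)  # fundamental frequency per frame
        rms = librosa.feature.rms(y=y)[0]              # energy per frame
        return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

    def facial_features(frame):
        """Factor 3 -- facial-expression descriptors; model-specific, stubbed here."""
        raise NotImplementedError("hypothetical facial feature extractor")

    # The fused vector would be the classifier input, e.g.:
    # x = np.concatenate([speech_texture_features(p), prosodic_features(p), facial_features(f)])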

How It Works

The client (a patient or consumer) connects to our service and provides voice and facial data through our software application. Our AI tool, running on central servers, analyses the data and returns a quick evaluation of the client's current emotional state (and historical state, for returning clients).
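
As an illustration of this flow, the sketch below shows a client uploading voice and facial data and receiving an evaluation. The endpoint URL, field names, and response schema are assumptions for illustration, not the actual API.

    # Hypothetical client call; the endpoint and schema are assumptions.
    import requests

    def evaluate_emotion(audio_path, video_path):
        """Upload voice and facial data; return the service's emotion evaluation."""
        with open(audio_path, "rb") as audio, open(video_path, "rb") as video:
            response = requests.post(
                "https://api.example.com/v1/emotion/evaluate",  # hypothetical endpoint
                files={"voice": audio, "face": video},
                timeout=30,
            )
        response.raise_for_status()
        return response.json()  # e.g. {"current": "calm", "history": [...]}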

Team building

We have the team of scientists, ICT professionals, and project managers necessary to complete this project.

Software Development

The next step will be the development of specialised software that works for a diverse range of age groups, ethnicities, and genders.

Extensive Testing

The software will then be tested extensively across the full range of target user groups, covering unit, integration, end-to-end, and performance tests.

Registration with Authorities

The software and device will be registered with the relevant authorities for regular public use in the health industry.

The following stages will be carried out to complete the final product.

Working Methodology

Thoroughly analyse previous research on emotion recognition from speech in robot agents to overcome the existing problems.

Authentic data sets are used for product development. Further data will be collected directly to ensure the precision of the outcomes.

Emotional speech is segmented into continuous audio and video signals. Features are calculated from each utterance; finally, we select a model and develop a set of evaluation metrics (a model-comparison sketch follows).
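
A minimal sketch of the model-comparison step follows, assuming per-utterance feature vectors X and emotion labels y are already prepared; the candidate models and the accuracy metric are illustrative choices, not the final selection.

    # Compare candidate models with cross-validated accuracy.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    CANDIDATES = {
        "svm": SVC(kernel="rbf"),
        "random_forest": RandomForestClassifier(n_estimators=200),
    }

    def compare_models(X, y):
        """Return the mean 5-fold cross-validated accuracy of each candidate."""
        return {
            name: cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
            for name, model in CANDIDATES.items()
        }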

Finally, we implement our algorithm, analyse the data, and verify the validity of the outcomes through unit tests, integration tests, end-to-end tests, and performance testing, as sketched below.
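
The sketch below shows what the unit and end-to-end layers might look like as pytest-style tests; the emotion_tool module, file paths, and label set are hypothetical, reusing the helpers from the earlier sketches.

    # Hypothetical pytest tests; emotion_tool and the test files are assumptions.
    import numpy as np
    from emotion_tool import evaluate_emotion, speech_texture_features  # hypothetical module

    def test_feature_vector_shape():
        # Unit test: the audio pipeline should yield a finite, fixed-length vector.
        x = speech_texture_features("tests/data/neutral.wav")
        assert x.shape == (26,) and np.isfinite(x).all()

    def test_end_to_end_evaluation():
        # End-to-end test: a known clip should map to one of the supported labels.
        result = evaluate_emotion("tests/data/happy.wav", "tests/data/happy.mp4")
        assert result["current"] in {"happy", "sad", "angry", "calm", "neutral"}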