Emotion plays a very important role in human mental life. It is a medium for expressing one's point of view or mental state to others and a channel for representing one's feelings, and it is a key component of speech. Automatically recognizing emotion in a speech recording can improve human-computer interaction, and it also enables other kinds of analysis, such as searching for paralinguistic phenomena or judging the sincerity of the speaker. In this work, a back-propagation neural network (BPNN), SVM and HMM classifiers are applied to speech features obtained through feature extraction, and classification performance is evaluated on the extracted features for each classifier. Four emotions occurring in speech are recognized: happy, sad, aggressive and fear. On the basis of these emotions, a conclusion is finally drawn together with accuracy results. The entire simulation is carried out in the MATLAB environment.
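As a rough illustration of the pipeline described above, the sketch below extracts an utterance-level feature vector from each speech file and trains a multi-class SVM on it in MATLAB. It is a minimal sketch only: the choice of mean MFCCs as the feature set, the file names and labels, and the use of fitcecoc (MATLAB's multi-class wrapper around binary SVMs) are assumptions for illustration, not the exact configuration of this work; the Audio Toolbox and the Statistics and Machine Learning Toolbox are assumed to be available.

% Minimal sketch (not the exact implementation of this work): extract a
% fixed-length feature vector per utterance and train a multi-class SVM.
% Assumes the Audio Toolbox (mfcc) and the Statistics and Machine Learning
% Toolbox (fitcecoc); file names and labels below are hypothetical examples.

files  = {'happy_01.wav', 'sad_01.wav', 'aggressive_01.wav', 'fear_01.wav'};
labels = {'happy'; 'sad'; 'aggressive'; 'fear'};

features = zeros(numel(files), 13);          % one 13-dim vector per utterance
for k = 1:numel(files)
    [x, fs] = audioread(files{k});           % load the recording
    x = mean(x, 2);                          % mix down to mono if stereo
    c = mfcc(x, fs);                         % frame-level MFCCs (frames x coeffs)
    features(k, :) = mean(c(:, 1:13), 1);    % average over frames
end

% fitcsvm handles only two classes, so an error-correcting output code
% (ECOC) model combines binary SVMs into a four-emotion classifier.
model = fitcecoc(features, labels);

% Classify a new utterance processed the same way as the training data.
[xNew, fsNew] = audioread('test_utterance.wav');
cNew = mean(mfcc(mean(xNew, 2), fsNew), 1);
predictedEmotion = predict(model, cNew(1:13));

In practice, many labelled utterances per emotion would be used for training, and the HMM and BPNN classifiers mentioned above would be trained on the same extracted features so that their accuracy results can be compared.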
Keywords: BPNN, Classification, Emotion Recognition, Speech, SVM, HMM.