MFCC+HMM

Category: Pattern Recognition (vision/speech, etc.)
Development tool: MATLAB
File size: 16638 KB
Downloads: 5
Upload date: 2020-03-30 09:41:26
Uploader: 天天向上学程序
Description: Emotion recognition on a Chinese speech database using MFCC features and an HMM model.

File list:
MFCC+HMM\HMM\COPYING (1327, 2013-09-16)
MFCC+HMM\HMM\dr_wav2mfcc_e_d_a.m (2019, 2018-11-08)
MFCC+HMM\HMM\EMAdataabstract.m (392, 2018-11-07)
MFCC+HMM\HMM\EM_hmm_skips_1gau.m (4230, 2018-10-30)
MFCC+HMM\HMM\forward_backward_hmm_skips_1gau_log_math.m (2069, 2013-07-11)
MFCC+HMM\HMM\forward_hmm_skips_1gau_log_math.m (1723, 2018-10-30)
MFCC+HMM\HMM\fwav2mfcc_e_d_a.m (1946, 2018-11-08)
MFCC+HMM\HMM\generate_htk_filelist_txt.m (850, 2011-10-02)
MFCC+HMM\HMM\generate_htk_word_trans_mlf.m (455, 2018-10-30)
MFCC+HMM\HMM\generate_LR_HMM_skips_HTK_structure.m (1739, 2013-07-11)
MFCC+HMM\HMM\generate_LR_HMM_skips_structure.m (1883, 2018-11-13)
MFCC+HMM\HMM\generate_selected_TI_isolated_digits_testing_list_mat.m (1422, 2018-11-07)
MFCC+HMM\HMM\generate_selected_TI_isolated_digits_training_list_mat.m (1396, 2018-11-07)
MFCC+HMM\HMM\generate_testing_list_mat.m (330, 2013-07-11)
MFCC+HMM\HMM\generate_training_list_mat.m (331, 2013-07-11)
MFCC+HMM\HMM\global_mean_var_for_hmm_skips_1gau.asv (2397, 2018-11-14)
MFCC+HMM\HMM\global_mean_var_for_hmm_skips_1gau.m (2397, 2018-11-14)
MFCC+HMM\HMM\hmm_with_skips.mat (546, 2018-10-30)
MFCC+HMM\HMM\logDiagGaussian.m (141, 2011-10-06)
MFCC+HMM\HMM\logSum.m (472, 2018-10-30)
MFCC+HMM\HMM\main_dr_wav2mfcc_e_d_a.m (525, 2018-11-08)
MFCC+HMM\HMM\main_test_EM.m (891, 2013-09-21)
MFCC+HMM\HMM\main_train_EM.m (2132, 2018-04-28)
MFCC+HMM\HMM\main_train_test_EM.asv (3518, 2018-11-14)
MFCC+HMM\HMM\main_train_test_EM.m (3496, 2018-11-13)
MFCC+HMM\HMM\main_train_test_EM_init_zero_mean.m (2819, 2013-09-21)
MFCC+HMM\HMM\main_train_test_EM_with_skips.m (2726, 2013-09-21)
MFCC+HMM\HMM\mfcc_e_d_a\a11-wyn.mfc (12804, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a21-cjq.mfc (6408, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a21-scy.mfc (7500, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a21-xpy.mfc (8592, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a22-scy.mfc (6876, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a22-wyn.mfc (11868, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a22-zl.mfc (6252, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a23-zl.mfc (6252, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a31-xpy.mfc (12492, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a32-cjq.mfc (12024, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a32-scy.mfc (10620, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a32-wyn.mfc (11400, 2018-11-11)
MFCC+HMM\HMM\mfcc_e_d_a\a32_ld2.mfc (12024, 2018-11-11)
... ...

This readme document is for version 1.03. The EM training function is updated in this version. Those interested in a simpler version are invited to download version 1.01, in which the HMM structure is left-to-right without skips. Those interested in high-order hidden Markov models (HO-HMM) or hidden semi-Markov models (HSMM) are invited to visit https://sourceforge.net/projects/ho-hmm/. In this version, the HMMs are allowed to have state-skipping transitions. State 1 and State N are the null start and end states, respectively.

The entry point for this package is "main_train_test_EM.m". In that script file, you may need to modify several parameters of the recognition system, such as MODEL_NO, dim (the dimension of the feature vector), ITERATION_END (which determines the number of training iterations), the range for EMIT_STATE_NO, and the model structure, which is defined by the initialization probabilities A0, Aij, and Af. A0 is a row vector for the transition probabilities from the dummy start state to the emitting states, i.e., A0(k) is used to initialize A(1,k+1). Aij is a row vector for the transition probabilities from an emitting state to itself and to the following states, i.e., Aij(k) is used to initialize A(i,i+k-1) for all i. Af is a row vector used to set the transition probability from the last k-th emitting state to the null end state. For each k, if Af(k) is larger than A(N-k,N), then Af(k) replaces A(N-k,N) and the probabilities of the transition arcs leaving that state are renormalized. If Af(k) does not exist or is not larger than A(N-k,N), then A(N-k,N) is not affected. A sketch of this initialization convention is given below.

Before you start to use the programs, you should first prepare the training and testing data. Excerpts of the TIDIGITS database can be obtained from http://cronos.rutgers.edu/~lrr/speech%20recognition%20course/databases/isolated_digits_ti_train_endpt.zip and http://cronos.rutgers.edu/~lrr/speech%20recognition%20course/databases/isolated_digits_ti_test_endpt.zip. The root directory of the training data, isolated_digits_ti_train_endpt, and the root directory of the test data, isolated_digits_ti_test_endpt, should be placed under the "wav" directory so that "main_train_test_EM.m" can be run without modification.

To prepare your own data, you can modify the Matlab script file "main_dr_wav2mfcc_e_d_a.m" to extract the feature vector sequences from your own waveform data. You also need to create a .mat file containing a list of training data and another .mat file containing a list of testing data, where the first field of a record in the list is the word id (an integer) and the second field is the path of the data file. Example Matlab script files for creating the training and testing list files are "generate_selected_TI_isolated_digits_training_list_mat.m" and "generate_selected_TI_isolated_digits_testing_list_mat.m", respectively. The feature file format used in this version is compatible with the HTK format.
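The following sketch illustrates the A0/Aij/Af convention described above. It is not the package's "generate_LR_HMM_skips_structure.m"; the numeric values, the renormalization of rows truncated near State N, and the interpretation of the Af renormalization (keeping Af(k) fixed and rescaling the remaining arcs) are assumptions made for the example.

    % Illustrative sketch of building the (N x N) transition matrix A for a
    % left-to-right HMM with skips, with a null start state (State 1) and a
    % null end state (State N). All numeric values are assumed.
    EMIT_STATE_NO = 5;                     % number of emitting states (assumed)
    N   = EMIT_STATE_NO + 2;               % plus the null start and null end states
    A0  = [0.5 0.5];                       % A0(k) initializes A(1, k+1)
    Aij = [0.6 0.3 0.1];                   % Aij(k) initializes A(i, i+k-1): self-loop, next, skip
    Af  = 0.5;                             % Af(k) may override A(N-k, N)

    A = zeros(N);
    A(1, 2:1+length(A0)) = A0;             % dummy start state -> emitting states
    for i = 2:N-1                          % emitting states
        for k = 1:length(Aij)
            j = i + k - 1;
            if j <= N
                A(i, j) = Aij(k);          % Aij(k) -> A(i, i+k-1)
            end
        end
        A(i, :) = A(i, :) / sum(A(i, :));  % renormalize rows truncated near State N (assumption)
    end
    for k = 1:length(Af)                   % apply Af only where it raises the exit probability
        i = N - k;                         % the last k-th emitting state
        if Af(k) > A(i, N)
            rest = 1:N-1;                  % all outgoing arcs except the one to the null end state
            A(i, rest) = A(i, rest) * (1 - Af(k)) / sum(A(i, rest));
            A(i, N) = Af(k);               % row now sums to 1 with A(i, N) fixed at Af(k)
        end
    end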

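A minimal sketch of creating a training list .mat file is given below, assuming a cell array whose first column holds the word id (an integer) and whose second column holds the path of the feature file. The variable name training_list, the relative paths, and the word ids are assumptions; see "generate_selected_TI_isolated_digits_training_list_mat.m" for the format actually used by the package.

    % Hypothetical training list: word id (integer) and feature-file path per record.
    training_list = { ...
        1, 'mfcc_e_d_a/a11-wyn.mfc'; ...   % ids here are arbitrary example values
        2, 'mfcc_e_d_a/a21-cjq.mfc'; ...
        2, 'mfcc_e_d_a/a21-scy.mfc'};
    save('training_list.mat', 'training_list');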