discrib

Category: matlab programming
Development tool: matlab
File size: 18167KB
Downloads: 7
Upload date: 2018-08-30 00:19:37
Uploader: habib1418
Description: Extreme learning machine (ELM) has become a popular topic in machine learning in recent years. ELM is a new kind of single-hidden-layer feedforward neural network with an extremely low computational cost. ELM, however, has two evident drawbacks

File list:
discrib\abstract.docx (12985, 2017-10-05)
discrib\p1326.avi (8801334, 2017-10-05)
discrib\pro1326.pdf (2112676, 2017-10-01)
discrib\pro1326.zip (9606554, 2018-08-12)
discrib\SBELM Classification\BELM_Classification.m (4276, 2017-10-05)
discrib\SBELM Classification\BELM_PosteriorMode.m (2947, 2017-10-05)
discrib\SBELM Classification\data\Iris.tr (2858, 2013-01-22)
discrib\SBELM Classification\Example.m (2271, 2017-10-05)
discrib\SBELM Classification\PreProcessData.m (969, 2014-04-29)
discrib\SBELM Classification\Sbelm_Classify.m (4302, 2017-10-05)
discrib\SBELM Classification\Sbelm_HiddenActivation.m (842, 2013-10-21)
discrib\SBELM Classification\Sbelm_Hiddenoutput.m (541, 2013-10-25)
discrib\SBELM Classification\Sbelm_multiclass_probability.m (1940, 2017-10-05)
discrib\SBELM Classification\Sbelm_Predict.m (3161, 2017-10-05)
discrib\SBELM Classification\Sbelm_train_cv.m (7672, 2017-10-05)
discrib\SBELM Classification\SB_Estimation.m (2093, 2017-10-05)
discrib\SBELM Classification\SB_PosteriorMode.m (3810, 2017-10-05)
discrib\Thumbs.db (13312, 2018-08-24)
discrib\SBELM Classification\data (0, 2014-04-29)
discrib\SBELM Classification (0, 2017-10-05)
discrib (0, 2018-08-22)

The SBELM toolbox was developed by the University of Macau for academic use. Any academic use of this toolbox should cite the paper "Sparse Bayesian Extreme Learning Machine for Multi-classification" by Jiahua Luo, Chi-Man Vong and Pak-In Wong. Please report any bugs in the toolbox to mb15457@umac.mo.

Descriptions of the main functions:

1. Example(classifier)
% This is an example showing how to use SBELM.
% Format of the training data set: the first column contains labels numerically ranging from 1 to the number of categories; the other columns are the attributes of the pattern. Please refer to the example data file Balance.tr, which can be opened in a plain-text editor.
% Users might first call [...] = Sbelm_train_cv(...) to find parameters by cross validation, or directly call [...] = Sbelm_Classify(...), then use [...] = Sbelm_Predict(...) to retrain and obtain the test accuracy.

2. [cvAccu,Accu_std,properHiddenNeuronsNum,ProperActives_hnum,bestSeed,label,number_class] = Sbelm_train_cv(train_data, n_folds, LowerHidden_N, HiddenStep, UpperNeuron_N, LowerSeed, UpperSeed, Activation, classifier)
% This function performs cross validation. Users may begin with this function to find optimal parameters for further training, or skip it and call Sbelm_Classify(...) directly.
% Input arguments:
%   train_data              - training data; format: label : attribute1 : attribute2 : ...
%   n_folds                 - n-fold cross validation
%   LowerHidden_N [optional] - the lower bound on the number of hidden neurons when selecting the best N
%   UpperNeuron_N [optional] - the upper bound, analogous to LowerHidden_N
%   HiddenStep [optional]    - step size of the search LowerHidden_N : HiddenStep : UpperNeuron_N
%   LowerSeed, UpperSeed [optional] - select the best seed within [LowerSeed, UpperSeed] (step = 1) for generating uniformly distributed input-to-hidden synapses
%   Activation [optional]    - the activation function of the hidden layer
%   classifier [optional]    - the classifier to be trained
% Output arguments:
%   cvAccu                  - the best validation accuracy
%   Accu_std                - the standard deviation of the n_folds validation accuracies
%   properHiddenNeuronsNum  - the determined number of hidden neurons
%   bestSeed                - the seed determined by cross validation
%   label                   - C-by-1 matrix containing the label of each category
%   number_class            - number of categories
% Note: SBELM tends to be insensitive to the number of hidden neurons and achieves its best performance with a small number, so users can narrow the interval [LowerHidden_N, UpperNeuron_N] to accelerate the cross-validation process. Also, the seed used for generating the input-to-hidden layer weights may have a -2% to +2% impact on accuracy; users can further narrow the seed search range, e.g. [1:1:5], or keep it fixed.

3. [Model,ActiveNodes] = Sbelm_Classify(X_train, T_train, label, number_class, NumberofHiddenNeurons, Activation, seed)
% Users may begin directly with this function to train an SBELM classifier with their own parameter-selection strategy.
% Input arguments:
%   X_train                 - N-by-M matrix of training data; N is the number of training samples, M the dimension of the attributes
%   T_train                 - observed outputs; labeling as [0,1,2,...,C] is recommended
%   label                   - labels of all categories
%   NumberofHiddenNeurons   - number of hidden neurons
%   Activation [optional]   - the activation function of the hidden layer
%   seed [optional]         - seed for generating uniformly distributed input-to-hidden synapses; different seeds may have a +/-1~2% impact on testing accuracy
% Output arguments:
%   Model                   - the structure holding the trained models; contains C*(C-1)/2 classifiers
%   ActiveNodes             - the mean number of active hidden nodes over all classifiers

4. [Prob,TestingAccuracies] = Sbelm_Predict(Xtest, ttest, label, allmodel, number_class, classifier)
% This function predicts the outputs for a testing dataset.
% Input arguments:
%   Xtest                   - N-by-M matrix of the testing dataset; N is the number of instances, M the dimension of each attribute (the number of input nodes for SBELM)
%   ttest                   - the corresponding labels of Xtest
%   label                   - the set of labels (C-by-1) of each category
%   allmodel                - the C*(C-1)/2 classifiers trained by pairwise SBELM
%   number_class            - the number of categories
%   classifier              - SBELM or BELM; the default denotes SBELM
% Output arguments:
%   Prob                    - N-by-C matrix of predicted probabilities for the testing dataset
%   TestingAccuracies       - the accuracy on the testing dataset
% For detailed analysis, please refer to the paper "Sparse Bayesian Extreme Learning Machine for Multi-classification" by Jiahua Luo and Chi-Man Vong.

5. [Model,ActiveNodes] = BELM_Classification(X_train, T_train, label, number_class, NumberofHiddenNeurons, Activation, seed)
% We have also implemented the non-sparse Bayesian extreme learning machine for multi-classification (BELM). The difference between the SBELM and BELM models is that SBELM adopts independent ARD priors for the distribution of the output weights, while BELM adopts a single prior governing the distribution of all weights. For details, please refer to our paper "Sparse Bayesian Extreme Learning Machine" by Jiahua Luo and Chi-Man Vong.
% This function is called in the same way as Sbelm_Classify, and BELM predictions can be made directly with Sbelm_Predict(...).
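A minimal MATLAB sketch of the workflow the descriptions above suggest (cross-validate, retrain, predict). The parameter values, the 'sigmoid' activation string, the 'SBELM' classifier string, and the reuse of the training file as a stand-in test set are all illustrative assumptions, not values confirmed by the toolbox documentation:

```matlab
% Assumed data format (see above): column 1 = numeric label, remaining
% columns = attributes. Iris.tr ships with the toolbox under data\.
train_data = load('data\Iris.tr');
test_data  = train_data;   % illustration only; use a held-out set in practice

% 1) 5-fold cross validation over hidden-neuron counts 10:10:50, seeds 1..5.
%    'sigmoid' and 'SBELM' are assumed argument values.
[cvAccu, Accu_std, bestN, activeN, bestSeed, label, number_class] = ...
    Sbelm_train_cv(train_data, 5, 10, 10, 50, 1, 5, 'sigmoid', 'SBELM');

% 2) Retrain on the full training set with the parameters found by CV.
X_train = train_data(:, 2:end);
T_train = train_data(:, 1);
[Model, ActiveNodes] = Sbelm_Classify(X_train, T_train, label, ...
    number_class, bestN, 'sigmoid', bestSeed);

% 3) Predict on the test set and report the accuracy.
Xtest = test_data(:, 2:end);
ttest = test_data(:, 1);
[Prob, TestingAccuracy] = Sbelm_Predict(Xtest, ttest, label, Model, ...
    number_class, 'SBELM');
```

The same three-step pattern applies to BELM: replace the call in step 2 with BELM_Classification and keep Sbelm_Predict for step 3, as noted in the function descriptions.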
