adaboost

Category: Other
Development tool: matlab
File size: 7188KB
Downloads: 2776
Upload date: 2005-04-22 12:06:19
Uploader: admin
Description: An AdaBoost algorithm implemented in MATLAB. This is the program with which I started my research on AdaBoost, and it has practical value.

File list:
adaboost\adaboost_test.m (329, 2004-12-04)
adaboost\adaboost_train.m (4376, 2004-12-05)
adaboost\eva_stump.m (725, 2004-12-05)
adaboost\debug_test.m (972, 2004-12-04)
adaboost\part3_data.mat (196784, 2004-12-04)
adaboost\data_partition.m (926, 2004-12-05)
adaboost\main.m (1697, 2004-12-14)
adaboost\adaboost and rbf\abr_v1\COPYRIGHTS.TXT (3411, 2004-10-11)
adaboost\adaboost and rbf\abr_v1\sample_adaboost_reg.m (2167, 1999-12-11)
adaboost\adaboost and rbf\abr_v1\sample_rbf_classif.m (1881, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\sample_rbf_regr.m (1853, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\mytest.m (1145, 2005-03-11)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\calc_output.m (1217, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\calc_weights.m (848, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\cluster.m (540, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\display.m (548, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\do_learn.m (1099, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_C.m (410, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_lambda.m (394, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_max_iter.m (390, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_num_cen.m (434, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_output.m (1028, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_param.m (391, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_R.m (364, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_test.m (598, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_test_size.m (415, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_train.m (611, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_train_size.m (418, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_val.m (585, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_val_size.m (412, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\get_w.m (410, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\rbf_net_w.m (1625, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\set_C.m (602, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\set_ftol.m (452, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\set_lambda.m (452, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\set_max_iter.m (378, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\set_param.m (395, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\set_R.m (485, 1999-12-10)
adaboost\adaboost and rbf\abr_v1\@rbf_net_w\set_w.m (372, 1999-12-10)
... ...

Part 1: For each weak learner (a tree stump), pick the best feature and the best threshold C jointly. This works better than building a stump for every feature and then deciding by a vote across all the features; each weak learner is just one tree stump. (A sketch of this step follows the file descriptions below.)

Part 2: I = ceil(N * rand([N,1])) draws the bootstrap sample indices for N training samples, i.e. N indices sampled uniformly with replacement (see the second sketch below).

Part 3: Randomly use 90% of the data for training and 10% for evaluation, and randomly flip 5% of the training labels into the other class. Train classifiers on the clean and on the noisy training data, using at most 50 tree stumps, and compute both classifiers' errors on the same test set. Repeat this 50 times and average the errors (third sketch below).

DATA         Error(%, clean)   Error(%, noise)   Increase(%)
Ionosphere   17.3              17.9              0.6
Sonar        19.1              22.8              3.7

When using bootstrap samples (stochastic AdaBoost):

DATA         Error(%, clean)   Error(%, noise)   Increase(%)
Ionosphere   15.6              17.0              1.4
Sonar        20.3              22.4              2.1

#######################################################################################

main.m: entry point.
adaboost_train.m: AdaBoost training (with and without bootstrap sampling).
adaboost_test.m: makes the final decision by voting over the M tree stumps.
eva_stump.m: subroutine for evaluating a tree stump on one sample.
data_partition.m: partitions the original data set into training and test sets.
debug_test.m: used for debugging only.
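Sketch for Part 1: a minimal weighted decision-stump search over all (feature, threshold) pairs. This is not code from the package; the function name fit_stump, the polarity loop, and the variables X, y, w are illustrative, and labels are assumed to be in {-1, +1}.

function [best_f, best_c, best_s, best_err] = fit_stump(X, y, w)
% X: N-by-D features, y: N-by-1 labels in {-1,+1}, w: N-by-1 weights (sum to 1).
% Exhaustively tries every feature, every observed value as threshold C,
% and both polarities, keeping the stump with the smallest weighted error.
[N, D] = size(X);
best_err = inf;
for f = 1:D
    thresholds = unique(X(:, f));        % candidate cut points for feature f
    for c = thresholds'
        for s = [-1, 1]                  % stump polarity
            h = s * sign(X(:, f) - c);   % stump prediction on all samples
            h(h == 0) = s;               % break ties consistently
            e = sum(w(h ~= y));          % weighted misclassification error
            if e < best_err
                best_err = e;
                best_f = f; best_c = c; best_s = s;
            end
        end
    end
end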
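Sketch for Part 2: drawing one bootstrap sample with the indexing line quoted above. The toy data and the names Xb, yb are illustrative.

X = randn(20, 3);                % toy feature matrix, for illustration only
y = 2 * (rand(20, 1) > 0.5) - 1; % toy labels in {-1,+1}
N = size(X, 1);                  % number of training samples
I = ceil(N * rand([N, 1]));      % N indices drawn uniformly with replacement
Xb = X(I, :);                    % bootstrap sample of the features
yb = y(I);                       % matching bootstrap labels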
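Sketch for Part 3: the evaluation protocol described above. It assumes the data are already loaded into X (features) and y (labels in {-1,+1}), and it assumes hypothetical interfaces adaboost_train(X, y, M) and adaboost_test(model, X) loosely modeled on the file names in the package; the actual signatures in the archive may differ.

M = 50;                                   % maximum number of tree stumps
runs = 50;
err_clean = zeros(runs, 1);
err_noisy = zeros(runs, 1);
N = size(X, 1);
for r = 1:runs
    % random 90% / 10% train/test split
    p = randperm(N);
    ntr = round(0.9 * N);
    tr = p(1:ntr); te = p(ntr+1:end);
    Xtr = X(tr, :); ytr = y(tr);
    Xte = X(te, :); yte = y(te);
    % flip 5% of the training labels into the other class
    yno = ytr;
    fi = randperm(ntr);
    fi = fi(1:round(0.05 * ntr));
    yno(fi) = -yno(fi);                   % labels assumed in {-1,+1}
    % train on clean and noisy data, test both on the same held-out set
    model_c = adaboost_train(Xtr, ytr, M);    % hypothetical signature
    model_n = adaboost_train(Xtr, yno, M);    % hypothetical signature
    err_clean(r) = mean(adaboost_test(model_c, Xte) ~= yte);
    err_noisy(r) = mean(adaboost_test(model_n, Xte) ~= yte);
end
fprintf('clean: %.1f%%  noisy: %.1f%%  increase: %.1f%%\n', ...
    100 * mean(err_clean), 100 * mean(err_noisy), ...
    100 * (mean(err_noisy) - mean(err_clean)));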
