SSVMtoolbox

Category: MATLAB programming
Development tool: MATLAB
File size: 202KB
Downloads: 25
Upload date: 2011-06-12 16:33:39
Uploader: porschee
Description: Source code for the Smooth Support Vector Machine (SSVM) and
Smooth Support Vector Regression (SSVR)

File list:
hibiscus.m (14568, 2006-08-31)
Housing_dataset.txt (48034, 2006-08-22)
Ionosphere_dataset.mat (96424, 2006-08-23)
K_ssvm_predict.m (2652, 2006-08-30)
K_ssvm_train.m (6818, 2006-08-30)
srsplit.m (2990, 2006-08-25)
SSVM_M.m (5428, 2007-04-03)
ssvm_predict.m (2220, 2006-08-30)
ssvm_train.m (5127, 2006-08-30)
SSVR_M.m (5176, 2006-08-30)
SVKernel_C.m (3926, 2005-01-04)
SVKernel_EX.dll (1310720, 2004-12-10)
SVKernel_M.m (4851, 2004-09-23)

=============================================================
- SSVM Toolbox
- Last Update: 2006/8/30
- For questions or comments, please email Yuh-Jye Lee,
  yuh-jye@mail.ntust.edu.tw
- Web site: http://dmlab1.csie.ntust.edu.tw/downloads/
=============================================================

Table of Contents
=================
- Introduction
- Key Features
- Data Format
    For classification
    For regression
- Code Usage with Examples
    ssvm_train
    ssvm_predict
    hibiscus (automatic model selection; supports SSVM and SSVR)
- Training and Testing Procedures with Examples
    Classification procedure
    Regression procedure
- License

Introduction
============
SSVM toolbox is a Matlab implementation of the Smooth Support Vector
Machine. SSVM is a reformulation of the conventional SVM that can be
solved by a fast Newton-Armijo algorithm. In addition, because choosing
a good parameter setting is important for performance in a learning
task, the toolbox provides an automatic model selection tool to help
users find one. SSVM toolbox currently includes the smooth support
vector machine for classification, epsilon-insensitive smooth support
vector regression, and an automatic model selection tool based on
uniform design.

Key Features
============
* Solves classification and regression problems.
* Supports linear, polynomial, and radial basis (Gaussian) kernels.
* Provides automatic model selection for SSVM and SSVR with the RBF kernel.
* Handles large-scale problems by using a reduced kernel (RSVM).
* Provides cross-validation evaluation.
* Provides an alternative initial point (other than zeros) computed by
  regularized least squares.

Data Format
===========
SSVM toolbox is implemented in Matlab; use any data format that can be
loaded into Matlab. Instances are represented by a matrix (rows for
instances, columns for variables), and the labels (1 or -1) or
responses are represented by a column vector.

For classification
------------------
instances
| 10 -5  0.8 |   => inst 1
| 15 -6  0.2 |   => inst 2
|      .     |
|      .     |
| 21  1 -0.1 |   => inst n

label
|  1 |   => label of inst 1
|  1 |   => label of inst 2
|  . |
|  . |
| -1 |   => label of inst n

For regression
--------------
instances
| 11 -5  0.2 |   => inst 1
| 14 -7  0.8 |   => inst 2
|      .     |
|      .     |
| 20  2 -0.9 |   => inst n

label
|  3.2 |   => response of inst 1
|  1.7 |   => response of inst 2
|   .  |
|   .  |
| -1.1 |   => response of inst n

Code Usage with Examples
========================
SSVM toolbox contains three main functions: ssvm_train for SVM
training, ssvm_predict for SVM prediction, and hibiscus for automatic
model selection.

Usage of ssvm_train:
>>model = ssvm_train(label, inst, 'options')
---------------------------------------------------------------
*Inputs of ssvm_train:
  label  : training data class labels or responses
  inst   : training data inputs
  options:
    -s  learning algorithm (default: 0)
          0 - SSVM
          1 - SSVR
    -t  kernel type (default: 2)
          0 - linear
          1 - polynomial
          2 - radial basis (Gaussian kernel)
    -c  the weight parameter C of SVM (default: 100)
    -e  epsilon-insensitive value in epsilon-SVR (default: 0.1)
    -g  gamma in the kernel function (default: 0.1)
    -d  degree of the polynomial kernel (default: 2)
    -b  constant term of the polynomial kernel (default: 0)
    -m  scale factor of the polynomial kernel (default: 1)
    -r  ratio of random subset size to full data size (default: 1)
    -i  choice of initial point (default: 0)
          0 - zero vector (w = 0, b = 0)
          1 - initial point obtained from regularized least squares
    -v  number of cross-validation folds (default: 0)

*Output of ssvm_train:
  model  learning model (a Matlab structure)
    .w       normal vector of the separating (or response) hyperplane
    .b       bias term
    .RS      reduced set
    .Err     error rate (a Matlab structure)
      .Training    training error
      .Validation  validation error
      .Final       final model error
                   (classification: the error rate; regression: the
                    relative 2-norm error and the mean absolute error)
    .params  parameters specified by the user in the inputs

Example: Classification using a
Gaussian kernel
>>model_one = ssvm_train(label, inst, '-s 0 -t 2 -c 23.71 -g 0.0625')

Example: Classification using a polynomial kernel and an initial point
from regularized least squares
>>model_two = ssvm_train(label, inst, '-t 1 -c 23.71 -d 2 -m 3 -b 2 -i 1')

Example: Classification using a Gaussian kernel and a 10% reduced set
>>model_three = ssvm_train(label, inst, '-t 2 -c 421 -g 0.31 -r 0.1')

Example: Classification using a Gaussian kernel with 5-fold cross validation
>>model_four = ssvm_train(label, inst, '-t 2 -c 23.71 -g 0.0625 -v 5')

Example: Regression using a Gaussian kernel
>>model_five = ssvm_train(label, inst, '-s 1 -t 2 -c 10000 -g 0.71')

Usage of ssvm_predict:
>>[PredictedLabel, ErrRate] = ssvm_predict(label, inst, model)
-----------------------------------------------------------------------------------
*Inputs of ssvm_predict:
  label : testing data class labels or responses
  inst  : testing data inputs
  model : model learned from ssvm_train, or user-specified

*Outputs of ssvm_predict:
  PredictedLabel : predicted labels or responses
  ErrRate        : error rate (classification) or relative 2-norm error
                   (regression)

Example: prediction using model_one (classification problem)
>>[PredictedLabel, ErrRate] = ssvm_predict(label, inst, model_one)

Example: prediction using model_one without testing labels
(classification problem)
>>[PredictedLabel, ErrRate] = ssvm_predict(zeros(NumOfInst,1), inst, model_one)
Note: in this case, the reported error rate is meaningless.
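For intuition about how the returned model fields (.w, .b, .RS) can be
combined at prediction time, here is a short NumPy sketch. It is purely
illustrative and not part of the toolbox: it assumes the common
kernel-expansion decision rule sign(K(inst, RS) * w + b) with a Gaussian
kernel K(x, z) = exp(-gamma * ||x - z||^2); the exact conventions inside
ssvm_predict may differ.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian kernel matrix: K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)

def predict_labels(inst, RS, w, b, gamma):
    """Hypothetical kernel-expansion classifier: sign(K(inst, RS) @ w + b)."""
    return np.sign(rbf_kernel(inst, RS, gamma) @ w + b)

# Toy example: a reduced set of 2 points in 2 dimensions.
RS = np.array([[0.0, 0.0], [1.0, 1.0]])   # reduced set (model.RS)
w = np.array([1.0, -1.0])                 # expansion weights (model.w)
b = 0.5                                   # bias term (model.b)
inst = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
labels = predict_labels(inst, RS, w, b, gamma=0.5)  # entries are +1 or -1
```

With a reduced set (-r less than 1), RS has far fewer rows than the
training set, so this kernel evaluation is the main cost saved by RSVM.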
Example: prediction using model_five (regression problem)
>>[PredictedLabel, ErrRate] = ssvm_predict(label, inst, model_five)

Usage of hibiscus:
>>Result = hibiscus(label, inst, 'method', 'type', 'command')
------------------------------------------------------------------------------
Note: Gaussian kernel only.

*Inputs of hibiscus:
  label  : training data class labels or responses
  inst   : training data inputs
  method : learning algorithm (default: 'SSVM')
           must be one of {'SSVM', 'SSVR'} (case insensitive)
  type   : model selection method (default: 'UD')
           must be one of {'UD', 'GRID'} (case insensitive)
  command: optional
    -v  number of cross-validation folds (default: 5)
    -r  ratio of random subset size to full data size (default: 1)

*Output of hibiscus:
  Result : all returned information (a Matlab structure)
    .TErr       : training error
    .VErr       : validation error
    .Best_C     : the best C found by the model selection method
    .Best_Gamma : the best gamma found by the model selection method
    .Elapse     : CPU time in seconds
    .Points     : the parameter points tried
    .Ratio      : ratio of random subset size to full data size

Example: determine C and gamma by 5-fold cross-validation with the full
kernel (classification)
>>Result = hibiscus(label, inst, 'SSVM', 'UD', '-v 5 -r 1')

Example: determine C and gamma by 10-fold cross-validation with a 10%
reduced kernel (classification)
>>Result = hibiscus(label, inst, 'SSVM', 'UD', '-v 10 -r 0.1')

Example: determine C and gamma by 10-fold cross-validation with a 10%
reduced kernel (regression)
>>Result = hibiscus(label, inst, 'SSVR', 'UD', '-v 10 -r 0.1')

Training and Testing Procedures with Examples
=============================================

Classification procedure
------------------------
*Change your current directory to the SSVMtoolbox folder.
*Load the dataset Ionosphere_dataset.mat (included in the toolbox):
>>load Ionosphere_dataset.mat
*Determine good C and gamma via 5-fold cross validation with the full kernel:
>>Result = hibiscus(label, inst, 'SSVM', 'UD', '-v 5 -r 1');
*Read the
contents of Result (Best_C, Best_Gamma, etc.):
>>Result
>>Result.Best_C
*Train a model using the Best_C and Best_Gamma from Result:
>>model = ssvm_train(label, inst, '-s 0 -t 2 -c 10 -g 0.0856');
*Read the contents of model (w, b, parameters, etc.):
>>model
>>model.w
*Predict with the model obtained from training (here, the training set
is reused as the testing set):
>>[PredictedLabel, ErrRate] = ssvm_predict(label, inst, model);

Regression procedure
--------------------
*Change your current directory to the SSVMtoolbox folder.
*Load the dataset Housing_dataset.txt (included in the toolbox):
>>load Housing_dataset.txt
*Split the housing data into inst and label:
>>inst = Housing_dataset(:,1:13);
>>label = Housing_dataset(:,14);
*Determine good C and gamma via 5-fold cross validation with a 10%
reduced kernel:
>>Result = hibiscus(label, inst, 'SSVR', 'UD', '-v 5 -r 0.1');
*Read the contents of Result (Best_C, Best_Gamma, etc.):
>>Result
>>Result.Best_C
*Train a model using the Best_C and Best_Gamma from Result:
>>model = ssvm_train(label, inst, ['-s 1 -c ', num2str(Result.Best_C), ' -g ', num2str(Result.Best_Gamma), ' -r 0.1']);
*Read the contents of model (w, b, parameters, etc.):
>>model
>>model.w
*Predict with the model obtained from training (here, the training set
is reused as the testing set):
>>[PredictedLabel, ErrRate] = ssvm_predict(label, inst, model);
*Read the relative 2-norm error and the mean absolute error:
>>ErrRate(1)
>>ErrRate(2)

* Ionosphere_dataset and Housing_dataset are from the UCI repository.

License
=======
This software is available for non-commercial use only. The authors are
not responsible for implications from the use of this software.
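For readers curious where the "smooth" in SSVM comes from: the standard
smooth reformulation (Lee and Mangasarian) replaces the non-differentiable
plus function (x)_+ = max(x, 0) in the SVM objective with the twice
differentiable approximation p(x, beta) = x + (1/beta) * log(1 + exp(-beta*x)),
which is what makes the fast Newton-Armijo solver applicable. A minimal
NumPy sketch of that smoothing (illustrative only, not part of the toolbox):

```python
import numpy as np

def smooth_plus(x, beta):
    """Smooth approximation of the plus function max(x, 0):
    p(x, beta) = x + (1/beta) * log(1 + exp(-beta * x)),
    computed via logaddexp for numerical stability (the two forms are
    algebraically identical). p converges to max(x, 0) as beta grows,
    and is twice differentiable for any finite beta."""
    return np.logaddexp(0.0, beta * x) / beta

x = np.array([-2.0, 0.0, 2.0])
approx = smooth_plus(x, beta=100.0)   # very close to [0, 0, 2]
exact = np.maximum(x, 0.0)
```

Note that p(x, beta) always lies slightly above max(x, 0), with the
largest gap (log 2 / beta) at x = 0, so larger beta gives a tighter fit.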
