Reversible cellular automata (MATLAB)

Category: MATLAB programming
Development tool: MATLAB
File size: 3KB
Downloads: 19
Upload date: 2011-04-29 20:42:27
Uploader: point_net
Description: Reversible cellular automata MATLAB source code; I hope it is helpful to everyone.

File list:
RCA.m (2677, 2005-02-23)

This ReadMe contains a short description of the RCA matlab function. The code is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The RCA function takes as arguments a data set and a set of positive constraints, and returns a linear transformation of the data space into a better representation (or, alternatively, a Mahalanobis metric over the data space). The new representation is known to be optimal in an information-theoretic sense under the constraint of keeping equivalent data points close to each other.

The function is used as follows:

    [ B, A, newData ] = RCA( data, chunks )

The function parameters are:

data - The data set. Every row is the feature vector of one data instance; data(i,:) contains data instance i.

chunks - Positive constraints vector (constraints of the form "points a and b belong to the same class"). It is an integer tag vector of dimensions 1*length(data). Entry i of the vector relates to data instance i. When several data points are known to be from the same class, their entries in this vector should be identical positive integers to indicate the constraint. A '-1' tag indicates that the data instance is not positively constrained. For example, chunks = [2 -1 2 1 -1 -1 2] indicates that data instances 1, 3 and 7 are from the same class, data instances 4 and 6 are from the same class, and data instances 2 and 5 are not constrained. Remember that the sets of instances {1,3,7} and {4,6} might belong to the same class or to different classes.

useD - Optional parameter. When it is not given, RCA is done in the original dimension without dimensionality reduction; the RCA transformation is then invertible and the resulting Mahalanobis metric B is full rank. When useD is given, RCA is preceded by a constraints-based Fisher Linear Discriminant, which is the optimal dimensionality reduction according to the information-theoretic criterion. In this case A has dimensions (original dimension)*(useD) and B is of rank useD.

The arguments returned by this call are:

B - The RCA-suggested Mahalanobis matrix. Distances between data points x1, x2 should be computed by (x2-x1)'*B*(x2-x1).

A - The RCA-suggested transformation of the data. The data should be transformed by A*data.

newData - The data after the RCA transformation (A): newData = A*data.

The three returned arguments are just different forms of the same output. If one is interested in a Mahalanobis metric over the original data space, the first argument is all that is needed. If a transformation into another space (where the Euclidean metric can be used) is preferred, the second returned argument is sufficient. Using A and B is equivalent in the following sense: if y1 = A*x1 and y2 = A*x2, then (x2-x1)'*B*(x2-x1) = (y2-y1)'*(y2-y1).

Further help: a theoretical analysis of RCA appears in the paper 'Learning distance functions using equivalence relations', ICML 2003. The paper can be downloaded from http://www.cs.huji.ac.il/~aharonbh
I can answer simple questions concerning the code at aharonbh@cs.huji.ac.il
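As a quick illustration of the interface described above, the following MATLAB sketch builds a small random data set, encodes the README's example chunks vector, calls RCA, and checks the stated equivalence between the metric B and the transformation A. It assumes the distributed RCA.m is on the MATLAB path; the toy data and the instance indices used in the check are illustrative, not part of the original package.

    % Toy data set: 7 instances with 3 features each, one instance per row.
    data = randn(7, 3);

    % The example constraints from the description above:
    % instances 1, 3 and 7 share a class, instances 4 and 6 share a class,
    % and instances 2 and 5 are unconstrained.
    chunks = [2 -1 2 1 -1 -1 2];

    % Full-rank RCA (no dimensionality reduction).
    [B, A, newData] = RCA(data, chunks);

    % Check the equivalence of B and A on two instances:
    % (x2-x1)'*B*(x2-x1) should equal (y2-y1)'*(y2-y1) where y = A*x.
    x1 = data(1,:)';  x2 = data(2,:)';
    y1 = A*x1;        y2 = A*x2;
    distB = (x2-x1)' * B * (x2-x1);
    distA = (y2-y1)' * (y2-y1);
    fprintf('distance via B: %g, distance via A: %g\n', distB, distA);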
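For orientation only: the ICML 2003 paper cited above characterizes RCA (without dimensionality reduction) as whitening with respect to the within-chunklet covariance. Each chunklet is centered on its own mean, the centered points are pooled, B is taken as the inverse of their covariance, and A as its symmetric square root. The sketch below (rca_sketch, a hypothetical name) illustrates that computation under those assumptions; it is not the distributed RCA.m and omits the optional useD reduction.

    function [B, A, newData] = rca_sketch(data, chunks)
    % Illustrative RCA without dimensionality reduction, following the
    % description in the ICML 2003 paper: whiten the data by the
    % within-chunklet covariance matrix. Not the distributed RCA.m.
    centered = [];
    for tag = unique(chunks(chunks > 0))
        pts = data(chunks == tag, :);              % instances in one chunklet
        centered = [centered; pts - repmat(mean(pts, 1), size(pts, 1), 1)];
    end
    C = (centered' * centered) / size(centered, 1); % within-chunklet covariance
    B = inv(C);             % Mahalanobis matrix
    A = sqrtm(B);           % symmetric whitening transformation, A'*A = B
    newData = (A * data')'; % apply A to every instance (instances are rows)
    end

Because A'*A = B, Euclidean distances after the transformation reproduce the Mahalanobis distances defined by B, which is exactly the equivalence between the returned arguments stated above.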
