Wang-Zhao-HWSharp1-22.zip

  • wijdoy
    Author
  • matlab
    Development tool
  • 1.5MB
    File size
  • zip
    File format
  • 0
    Favorites
  • 10 points
    Download cost
  • 1
    Downloads
  • 2020-04-16 08:49
    Upload date
Contains an explanation of the EM algorithm, related papers, and source code. It should be of real interest to anyone studying this algorithm.
Wang-Zhao-HWSharp1-22.zip
  • Wang-Zhao-HW#1-2
  • calculatePercentCorrect.m
    269B
  • test8.jpg
    13.1KB
  • test16.jpg
    10.7KB
  • GeorgesSeurat10.jpg
    13.4KB
  • solution1.m
    1.4KB
  • test5.jpg
    8.9KB
  • test17.jpg
    15.8KB
  • test3.jpg
    8.9KB
  • classifyWithDT.m
    721B
  • GeorgesSeurat8.jpg
    8.7KB
  • test6.jpg
    13.2KB
  • test10.jpg
    11.4KB
  • test2.jpg
    3.7KB
  • GeorgesSeurat6.jpg
    7.3KB
  • GeorgesSeurat5.jpg
    9.9KB
  • VanGogh10.jpg
    14.7KB
  • VanGogh3.jpg
    33.7KB
  • VanGogh7.jpg
    10.3KB
  • trainDT.m
    3KB
  • test7.jpg
    10.5KB
  • calc_stats.m
    1.5KB
  • test9.jpg
    8KB
  • VanGogh5.jpg
    17.6KB
  • test15.jpg
    9.9KB
  • test4.jpg
    10.5KB
  • test19.jpg
    13.3KB
  • test11.jpg
    8.4KB
  • VanGogh2.jpg
    12.9KB
  • VanGogh4.jpg
    44.8KB
  • VanGogh8.jpg
    13KB
  • VanGogh1.jpg
    18KB
  • batchClassifyWithDT.m
    642B
  • GeorgesSeurat1.jpg
    200.9KB
  • test1.jpg
    11.7KB
  • GeorgesSeurat3.jpg
    13.6KB
  • GeorgesSeurat4.jpg
    9.3KB
  • test18.jpg
    17.5KB
  • computeOptimalSplit.m
    1.7KB
  • GeorgesSeurat2.jpg
    837.1KB
  • test20.jpg
    9KB
  • VanGogh6.jpg
    11.6KB
  • GeorgesSeurat9.jpg
    14KB
  • computeAccuracyofTestSamples.m
    1.1KB
  • test14.jpg
    16.6KB
  • VanGogh9.jpg
    9.9KB
  • solution2.m
    1.4KB
  • GeorgesSeurat7.jpg
    11.2KB
  • test12.jpg
    12.8KB
  • test13.jpg
    16.6KB
Description
% This is a function to train a binary decision tree. Feature values can
% be either real valued or discrete.
%
% Usage: node = trainDT(x, y)
%
% Inputs:
%
%   x - a S by D matrix, where S is the number of samples and D is the
%       length of a feature vector. x(s,:) gives the feature vector for
%       sample s.
%
%   y - an S by 1 array. y(s) gives the label for sample s.
%
% Outputs:
%
%   node - a structure containing the root node of the decision tree.
%       The decision tree returned by this function is simply a nested
%       set of nodes. Nodes can be one of two types: a leaf node OR the
%       root of another tree.
%
%       If a node is a leaf node, it will be a structure with the
%       following fields:
%
%           isLeaf - a field with value true, stating the node is a leaf
%               node.
%
%           label - the label for samples which follow the tree down to
%               this leaf node.
%
%           trainData - a 2 by 1 array. trainData(1) gives the label for
%               this leaf node and trainData(2) gives the number of
%               training samples that were classified according to the
%               path for this leaf.
%
%       If a node is not a leaf node, it will be the root of another
%       tree, and will have the following fields:
%
%           isLeaf - a field with value false, stating the node is not a
%               leaf node and simply the root of another tree.
%
%           attr - the index of the attribute of the feature vector to
%               split on for this node.
%
%           thresh - the threshold value to split on for this node.
%
%           child1 - a structure for the root node used to further
%               classify the feature vector for a sample, x, when
%               x(attr) <= thresh.
%
%           child2 - a structure for the root node used to further
%               classify the feature vector for a sample, x, when
%               x(attr) > thresh.
%
%           trainData - a 2 row matrix. The top row will list the labels
%               for all training data points that were classified under
%               this node. The bottom row will list the number of
%               training points for each label that were classified
%               under this node.
%
function node = trainDT(x, y)

% Make sure we have consistent training data
nSmps = size(x,1);
for s = 1:nSmps
    matchRows = all(repmat(x(s,:), [nSmps, 1]) == x, 2);
    matchedLabels = y(matchRows);
    assert(all(matchedLabels == matchedLabels(1)), 'Training data is not consistent.');
end

% Determine if we have a pure node
pureNode = all(y == y(1));

if pureNode
    node.isLeaf = true;
    node.label = y(1);
    node.trainData = [y(1); length(y)];
else
    % Find optimal split for the data
    [attr, thresh] = computeOptimalSplit(x, y);

    indsLessThanOrEqual = (x(:, attr) <= thresh);
    indsAbove = (x(:, attr) > thresh);

    node.isLeaf = false;
    node.attr = attr;
    node.thresh = thresh;
    node.child1 = trainDT(x(indsLessThanOrEqual,:), y(indsLessThanOrEqual));
    node.child2 = trainDT(x(indsAbove,:), y(indsAbove));

    % Store information on training data for this root
    uniqueLabels = unique(y);
    node.trainData = [uniqueLabels'; histc(y, uniqueLabels)'];
end
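For orientation, here is a minimal, hypothetical usage sketch; it is not part of the archive. It trains a tree on a toy data set and classifies a new sample by recursively walking the node fields documented above (isLeaf, label, attr, thresh, child1, child2). The names demoTrainDT and classifySample are invented for illustration, and the traversal is an assumed stand-in for the archive's own classifyWithDT.m, whose interface is not shown on this page.

% demoTrainDT.m -- hypothetical usage sketch, NOT included in the archive.
% Requires trainDT.m and computeOptimalSplit.m on the MATLAB path.
function demoTrainDT()
    x = [1 0; 1 1; 0 0; 0 1];   % 4 samples, 2 binary features
    y = [1; 1; 2; 2];           % one label per sample

    tree = trainDT(x, y);       % build the nested node structure

    label = classifySample(tree, [0 1]);
    fprintf('Predicted label: %d\n', label);
end

% Assumed traversal based on the documented node fields; the archive's
% classifyWithDT.m may differ in name, signature, and behavior.
function label = classifySample(node, xs)
    if node.isLeaf
        label = node.label;                        % leaf: return stored label
    elseif xs(node.attr) <= node.thresh
        label = classifySample(node.child1, xs);   % follow the <= branch
    else
        label = classifySample(node.child2, xs);   % follow the > branch
    end
end

The recursion mirrors the tree's construction: each non-leaf node compares one feature, x(attr), against thresh and delegates to child1 or child2 until a leaf supplies the label.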