精通MATLAB智能算法.温正(2015代码).zip (Mastering MATLAB Intelligent Algorithms, Wen Zheng, 2015 code)

  • Uploader: 紫火蓝冰
  • Development tool: matlab
  • File size: 7MB
  • File format: zip
  • Times favorited: 0
  • Download cost: 10 points
  • Download count: 42
  • Upload date: 2018-11-10 20:38
Mastering MATLAB Intelligent Algorithms, Wen Zheng (2015): a detailed introduction to neural network algorithms, particle swarm optimization, genetic algorithms, fuzzy logic control, immune algorithms, ant colony algorithms, wavelet analysis, and their implementation in MATLAB.
Contents
% pso_Trelea_vectorized.m
% a generic particle swarm optimizer
% to find the minimum or maximum of any
% MISO matlab function
%
% Implements Common, Trelea type 1 and 2, and Clerc's class 1". It will
% also automatically try to track to a changing environment (with varied
% success - BKB 3/18/05)
%
% This vectorized version removes the for loop associated with particle
% number. It also *requires* that the cost function have a single input
% that represents all dimensions of search (i.e., for a function that has 2
% inputs then make a wrapper that passes a matrix of ps x 2 as a single
% variable)
%
% Usage:
%   [optOUT] = PSO(functname,D)
% or:
%   [optOUT,tr,te] = ...
%       PSO(functname,D,mv,VarRange,minmax,PSOparams,plotfcn,PSOseedValue)
%
% Inputs:
%   functname - string of matlab function to optimize
%   D         - # of inputs to the function (dimension of problem)
%
% Optional Inputs:
%   mv - max particle velocity, either a scalar or a vector of length D
%        (this allows each component to have its own max velocity),
%        default = 4, set if not input or input as NaN
%
%   VarRange - matrix of ranges for each input variable,
%              default -100 to 100, of form:
%              [ min1 max1
%                min2 max2
%                  ...
%                minD maxD ]
%
%   minmax = 0, funct minimized (default)
%          = 1, funct maximized
%          = 2, funct is targeted to P(12) (minimizes distance to errgoal)
%
%   PSOparams - PSO parameters
%     P(1)  - Epochs between updating display, default = 100. if 0,
%             no display
%     P(2)  - Maximum number of iterations (epochs) to train, default = 2000.
%     P(3)  - population size, default = 24
%
%     P(4)  - acceleration const 1 (local best influence), default = 2
%     P(5)  - acceleration const 2 (global best influence), default = 2
%     P(6)  - Initial inertia weight, default = 0.9
%     P(7)  - Final inertia weight, default = 0.4
%     P(8)  - Epoch when inertial weight at final value, default = 1500
%     P(9)  - minimum global error gradient,
%             if abs(Gbest(i+1)-Gbest(i)) < gradient over
%             certain length of epochs, terminate run, default = 1e-25
%     P(10) - epochs before error gradient criterion terminates run,
%             default = 250, i.e. if the SSE does not change over 250
%             epochs then exit
%     P(11) - error goal, if NaN then unconstrained min or max, default = NaN
%     P(12) - type flag (which kind of PSO to use)
%             0   = Common PSO w/inertia (default)
%             1,2 = Trelea types 1,2
%             3   = Clerc's Constricted PSO, Type 1"
%     P(13) - PSOseed, default = 0
%             = 0 for initial positions all random
%             = 1 for initial particles as user input
%
%   plotfcn - optional name of plotting function, default 'goplotpso',
%             make your own and put here
%
%   PSOseedValue - initial particle position, depends on P(13), must be
%                  set if P(13) is 1 or 2, not used for P(13)=0, needs to
%                  be nXm where n<=ps, and m<=D
%                  If n<ps and/or m<D then remaining values are set random
%                  on Varrange
%
% Outputs:
%   optOUT - optimal inputs and associated min/max output of function, of form:
%            [ bestin1
%              bestin2
%                ...
%              bestinD
%              bestOUT ]
%
% Optional Outputs:
%   tr - Gbest at every iteration, traces flight of swarm
%   te - epochs to train, returned as a vector 1:endepoch
%
% Example: out = pso_Trelea_vectorized('f6',2)
%
% Brian Birge
% Rev 3.3
% 2/18/06

function [OUT,varargout] = pso_Trelea_vectorized(functname,D,varargin)

rand('state',sum(100*clock));

if nargin < 2
   error('Not enough arguments.');
end

% PSO PARAMETERS
if nargin == 2          % only specified functname and D
   VRmin = ones(D,1)*-100;
   VRmax = ones(D,1)*100;
   VR = [VRmin,VRmax];
   minmax = 0;
   P = [];
   mv = 4;
   plotfcn = 'goplotpso';
elseif nargin == 3      % specified functname, D, and mv
   VRmin = ones(D,1)*-100;
   VRmax = ones(D,1)*100;
   VR = [VRmin,VRmax];
   minmax = 0;
   mv = varargin{1};
   if isnan(mv)
      mv = 4;
   end
   P = [];
   plotfcn = 'goplotpso';
elseif nargin == 4      % specified functname, D, mv, Varrange
   mv = varargin{1};
   if isnan(mv)
      mv = 4;
   end
   VR = varargin{2};
   minmax = 0;
   P = [];
   plotfcn = 'goplotpso';
elseif nargin == 5      % functname, D, mv, Varrange, and minmax
   mv = varargin{1};
   if isnan(mv)
      mv = 4;
   end
   VR = varargin{2};
   minmax = varargin{3};
   P = [];
   plotfcn = 'goplotpso';
elseif nargin == 6      % functname, D, mv, Varrange, minmax, and psoparams
   mv = varargin{1};
   if isnan(mv)
      mv = 4;
   end
   VR = varargin{2};
   minmax = varargin{3};
   P = varargin{4};     % psoparams
   plotfcn = 'goplotpso';
elseif nargin == 7      % functname, D, mv, Varrange, minmax, psoparams, plotfcn
   mv = varargin{1};
   if isnan(mv)
      mv = 4;
   end
   VR = varargin{2};
   minmax = varargin{3};
   P = varargin{4};     % psoparams
   plotfcn = varargin{5};
elseif nargin == 8      % functname, D, mv, Varrange, minmax, psoparams, plotfcn, PSOseedValue
   mv = varargin{1};
   if isnan(mv)
      mv = 4;
   end
   VR = varargin{2};
   minmax = varargin{3};
   P = varargin{4};     % psoparams
   plotfcn = varargin{5};
   PSOseedValue = varargin{6};
else
   error('Wrong # of input arguments.');
end

% sets up default pso params
Pdef = [100 2000 24 2 2 0.9 0.4 1500 1e-25 250 NaN 0 0];
Plen = length(P);
P = [P,Pdef(Plen+1:end)];

df      = P(1);
me      = P(2);
ps      = P(3);
ac1     = P(4);
ac2     = P(5);
iw1     = P(6);
iw2     = P(7);
iwe     = P(8);
ergrd   = P(9);
ergrdep = P(10);
errgoal = P(11);
trelea  = P(12);
PSOseed = P(13);

% used with trainpso, for neural net training
if strcmp(functname,'pso_neteval')
   net = evalin('caller','net');
   Pd  = evalin('caller','Pd');
   Tl  = evalin('caller','Tl');
   Ai  = evalin('caller','Ai');
   Q   = evalin('caller','Q');
   TS  = evalin('caller','TS');
end

% error checking
if ((minmax == 2) & isnan(errgoal))
   error('minmax = 2, errgoal = NaN: choose an error goal or set minmax to 0 or 1');
end

if ( (PSOseed == 1) & ~exist('PSOseedValue') )
   error('PSOseed flag set but no PSOseedValue was input');
end

if exist('PSOseedValue')
   tmpsz = size(PSOseedValue);
   if D < tmpsz(2)
      error('PSOseedValue column size must be D or less');
   end
   if ps < tmpsz(1)
      error('PSOseedValue row length must be # of particles or less');
   end
end

% set plotting flag
if (P(1)) ~= 0
   plotflg = 1;
else
   plotflg = 0;
end

% preallocate variables for speed up
tr = ones(1,me)*NaN;

% take care of setting max velocity and position params here
if length(mv) == 1
   velmaskmin = -mv*ones(ps,D);     % min vel, psXD matrix
   velmaskmax =  mv*ones(ps,D);     % max vel
elseif length(mv) == D
   velmaskmin = repmat(forcerow(-mv),ps,1); % min vel
   velmaskmax = repmat(forcerow( mv),ps,1); % max vel
else
   error('Max vel must be either a scalar or same length as prob dimension D');
end
posmaskmin  = repmat(VR(1:D,1)',ps,1); % min pos, psXD matrix
posmaskmax  = repmat(VR(1:D,2)',ps,1); % max pos
posmaskmeth = 3; % 3 = bounce method (see comments below inside epoch loop)

% PLOTTING
message = sprintf('PSO: %%g/%g iterations, GBest = %%20.20g.\n',me);

% INITIALIZE INITIALIZE INITIALIZE INITIALIZE INITIALIZE INITIALIZE

% initialize population of particles and their velocities at time zero,
% format of pos = (particle#, dimension)
% construct random population positions bounded by VR
pos(1:ps,1:D) = normmat(rand([ps,D]),VR',1);

if PSOseed == 1  % initial positions user input, see comments above
   tmpsz = size(PSOseedValue);
   pos(1:tmpsz(1),1:tmpsz(2)) = PSOseedValue;
end

% construct initial random velocities between -mv,mv
vel(1:ps,1:D) = normmat(rand([ps,D]),...
                        [forcecol(-mv),forcecol(mv)]',1);

% initial pbes
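The point of the vectorized version is that each epoch updates all ps particles at once as a ps x D matrix instead of looping over particles. A minimal Python sketch of that Common-PSO-with-inertia update (NumPy assumed; the function name `pso_minimize` is illustrative, and for brevity it clamps positions to the range rather than using the toolbox's bounce method):

```python
import numpy as np

def pso_minimize(cost, D, ps=24, epochs=2000, mv=4.0, vr=(-100.0, 100.0),
                 ac1=2.0, ac2=2.0, iw1=0.9, iw2=0.4, iwe=1500, seed=0):
    """Vectorized Common PSO with linearly decaying inertia weight.

    cost must accept a ps x D matrix and return a length-ps vector,
    mirroring the single-matrix-input requirement described above.
    """
    rng = np.random.default_rng(seed)
    lo, hi = vr
    pos = rng.uniform(lo, hi, size=(ps, D))   # particle positions
    vel = rng.uniform(-mv, mv, size=(ps, D))  # particle velocities
    pbest = pos.copy()                        # per-particle best positions
    pbest_val = cost(pos)                     # per-particle best costs
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for i in range(epochs):
        # decay inertia weight from iw1 to iw2 over the first iwe epochs
        iw = iw2 if i >= iwe else iw1 - (iw1 - iw2) * i / iwe
        r1, r2 = rng.random((ps, D)), rng.random((ps, D))
        vel = iw * vel + ac1 * r1 * (pbest - pos) + ac2 * r2 * (gbest - pos)
        vel = np.clip(vel, -mv, mv)           # enforce max velocity mv
        pos = np.clip(pos + vel, lo, hi)      # keep positions inside VarRange
        val = cost(pos)
        better = val < pbest_val              # which particles improved
        pbest[better] = pos[better]
        pbest_val[better] = val[better]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val
```

For example, minimizing the 2-D sphere function `lambda x: (x**2).sum(axis=1)` drives the global best cost to near zero; the defaults here echo the Pdef values above (ps = 24, 2000 epochs, ac1 = ac2 = 2, inertia 0.9 to 0.4 by epoch 1500).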